News this Week

Science  24 Nov 2000:
Vol. 290, Issue 5496, pp. 1474
  1. LABORATORY ANIMALS

    Researchers Pained by Effort to Define Distress Precisely

    1. Constance Holden

    When does stress turn into distress? The U.S. government wants to clarify that question and others affecting the care of millions of animals being used in research. But the 2600 pieces of advice it received during a 4-month period that ended earlier this month suggest its job won't be easy. The comments highlight a deep split between animal activists, who see the potential new regulations as a step toward eliminating all painful procedures, and most researchers, who say that the present system is working well and that no major changes are needed.

    The U.S. Department of Agriculture (USDA) requested the comments to help it decide whether to adopt a formal definition of “distress” as part of its responsibilities under the Animal Welfare Act. Although the act seeks to “minimize pain and distress,” says Ron DeHaven, deputy administrator of USDA's Animal and Plant Health Inspection Service (APHIS), the regulations to date have “focused on pain and have not given equal consideration to distress.” To remedy that situation, APHIS came up with a working definition of distress: “a state in which an animal cannot escape from or adapt to the internal or external stressors or conditions it experiences, resulting in negative effects on its well-being.” In July it asked for public comments, the first step in a long process that will likely culminate in new regulations.

    APHIS would also like to fill in what DeHaven calls “gaps” in the reporting system used by research facilities. Current regulations require the labs to divide animals into three categories: those that do not undergo painful procedures; those that do and are given pain-relieving drugs; and those used in experiments, such as rodents for drug testing, that preclude relief because it would compromise the study. APHIS says these categories fall short by ignoring the duration and intensity of pain, the effectiveness of pain control, and palliative measures other than painkillers, anesthetics, or tranquilizers.

    The comment period ended on 7 November. Although the volume paled in comparison with the torrent of reaction, most of it negative, to the government's proposal to add rats, mice, and birds to the welfare act (Science, 6 October, p. 23), some researchers believe that the changes could have an even greater impact on science. They cover “not just reporting activities but also … how the IACUCs [institutional animal care and use committees] do their business,” says immunologist Robert Rich of Emory University in Atlanta, Georgia, president-elect of the Federation of American Societies for Experimental Biology (FASEB). Detailed regulatory prescriptions, he and others argue, might make the paperwork load intolerable and hobble IACUCs in exercising their best scientific judgment.

    Researchers also question APHIS's attempt to define distress. It is “vague and could lead to widely varying, highly subjective interpretations,” FASEB wrote USDA. “There are no simple physiological or behavioral criteria to mark the point where an animal that experiences stress becomes distressed.”

    Scientists are less united on how best to classify animals' exposure to pain and distress. “A lot of what is being talked about is already in place in the labs of responsible investigators,” says Randall Nelson of the University of Tennessee, Memphis, chair of the council for the Institute for Laboratory Animal Resources at the National Research Council. In addition to arguing that further tinkering with the rules is unnecessary, many researchers accuse APHIS of bowing to pressure from the animal rights community. It's “all part of a plan to stop all biomedical research” by drowning it in costly bureaucracy, says Joseph R. Haywood, a pharmacology professor at the University of Texas Health Science Center in San Antonio.

    Animal welfare groups don't deny that a discussion about new rules provides an opportunity to advance their agendas, which range from reduction to outright elimination of the use of animals in research. The Humane Society of the United States (HSUS), for example, wants to eradicate animal pain and distress by 2020 as part of a larger effort to “eliminate harm to animals” in research, education, and testing. Toward that goal, the society strongly favors more explicit rules for pain and distress reporting. HSUS has proposed a system, similar to one in the United Kingdom, that would group animals that undergo procedures without painkilling drugs into three classes based on whether the pain is mild, moderate, or severe.

    Some scientists also support refining the reporting categories. Anesthesiologist Alicia Karas of Tufts School of Veterinary Medicine in North Grafton, Massachusetts, for example, says that changes would help both APHIS inspectors and lab workers by making the monitoring system more objective and systematic. Typically, says Karas, “somebody walks by and makes a check mark somewhere to see if an animal looks OK after a procedure.” She has developed a checklist for research dogs at Tufts that covers physical problems, such as diarrhea and loss of appetite, and behavioral signs, such as irritability.

    Other scientists have been working on objective indices of rodent pain. David Morton of the University of Birmingham in the U.K., for example, has developed a form for rats that covers behavioral signs such as not grooming, “walking on tiptoe,” squeaking on being touched, and observations of “hunched posture,” scruffy coat, and weight loss. He believes that forms adapted for a number of species including fish and pigs “have improved animal care, because they indicate in an objective fashion when animals are not ‘right.’”

    DeHaven predicts that the “smoldering” issue of pain and distress will become increasingly visible in the next few years. It may also become even more contentious. With research in the postgenome world demanding a skyrocketing number of rats and mice, says Rich, “anyone who believes that we're going to eliminate pain and distress over the next 20 years just does not understand biological science.”

  2. MICROBIOLOGY

    Unlocking the Secrets of the Grim Reaper

    1. Kathryn Brown*
    1. Kathryn Brown is a writer in Alexandria, Virginia.

    During the 14th century, the Black Death swept Asia and Europe, littering streets with corpses. Why, people wondered? Was the plague the work of an angry God? A medieval curse? As it happens, the real culprit was a tiny bacterium: Yersinia pestis. Centuries later, scientists are still learning how Y. pestis does its dirty work. Now, on page 1594, a new study adds another piece to the puzzle. The work may also shed light on the modus operandi of other noxious bacteria.

    In the study, researchers led by biochemist Jack Dixon of the University of Michigan, Ann Arbor, unveil the workings of YopJ, one of a group of nefarious proteins that Yersinia bacteria, including Y. pestis, deploy. Early in the disease, Yersinia bacteria inject YopJ into macrophage cells of the immune system, beginning the cycle of destruction. There, the researchers have now found, YopJ acts like scissors—or more precisely, a cysteine protease—cleaving the macrophage proteins that would normally send an SOS to other immune cells for help. “YopJ cuts the macrophage's communication lines,” says Dixon. Leaving the macrophage stranded, he adds, YopJ clears the way for other Yersinia proteins to destroy the cell.

    “Very nice,” says microbiologist Brett Finlay, of the University of British Columbia in Vancouver, about the new study. Bit by bit, Finlay says, researchers are unraveling the half-dozen virulent proteins that pack Yersinia's punch. “This work takes us a step farther, defining the molecular machinery that allows YopJ to quiet the immune system,” Finlay says.

    Yersinia is infamous both for causing bubonic plague, which still affects at least 1000 people a year, and for its brilliant biochemistry. Rather than climb inside macrophage cells, the bacteria hover outside and, using a molecular syringe, shoot six Yops (Yersinia outer proteins) into the cells. Like a team of assassins, the Yops divide the labor, each protein playing a distinct role in killing the cell. Last year, Dixon and his colleagues reported (Science, 17 September 1999, p. 1920) that YopJ blocks two signaling pathways—mitogen-activated protein kinase (MAPK) and nuclear factor κB—that the macrophage activates to send an SOS, in the form of tumor necrosis factor (TNF), to the immune system's B and T cells.

    But how, exactly, does YopJ accomplish this? That question inspired the new study. To find out how YopJ works, the researchers ran computer programs that predicted YopJ's structure, based on its amino acid sequence. That analysis showed that YopJ closely resembles adenovirus protease (AVP), a well-known viral protein, as well as AvrBsT, a protein from a bacterial plant pathogen. These proteins all share a stretch of four amino acids—and it's this conserved catalytic region, Dixon knew, that makes AVP a cysteine protease and determines which substrates it can cleave. By extension, he reasoned, YopJ and AvrBsT must also be cysteine proteases, acting on similar substrates.

    To test that idea, the researchers created three YopJ mutants, each altered in the key catalytic region shared with AVP. Without this region intact, the team found, each YopJ mutant failed to block the MAPK pathway—and stop production of TNF—in macrophage cells. By contrast, wild-type YopJ proteins stopped MAPK activity cold, silencing the macrophage's call for help. “Without this working catalytic site,” Dixon concludes, “YopJ can't do its job.”

    To hammer the point home, Dixon decided to test the same mutations in AvrBsT, the plant homolog. He called Brian Staskawicz, a plant biologist at the University of California, Berkeley. And sure enough, when Staskawicz's lab mutated the same catalytic region in AvrBsT, that protein also failed to provoke its usual host response—in this case, localized cell death in tobacco leaves. Like YopJ, AvrBsT appears to be a cysteine protease, Staskawicz says. The first lurks behind the Black Death; the second, black spot disease. What's striking, adds Staskawicz, is that bacteria have evolved to unleash the same biochemistry in such different settings. Dixon adds: “Basically, the same thing that killed millions in Europe is sitting in your front yard.” Not to mention elsewhere—YopJ homologs can also be found in bacteria such as salmonella and Rhizobium.

    The Yersinia saga continues. Dixon's team is still searching for the unlucky host proteins cleaved by YopJ and AvrBsT. The current study suggests that both clip an assortment of so-called ubiquitin-like proteins used for signaling inside cells. The researchers hope to catch this action in vitro. Until they do, some hesitate to label YopJ a cysteine protease. “This is a very interesting idea, but more work needs to be done,” cautions microbiologist Olaf Schneewind of the University of California, Los Angeles. Even now, Y. pestis perplexes the crowd.

  3. BIOCHEMISTRY

    Ligand Chip Helps Manipulate Cells

    1. Yudhijit Bhattacharjee*
    1. Yudhijit Bhattacharjee writes from Columbus, Ohio.

    Scientists at the University of Chicago have found an electrochemical way to control interactions between cells and a chip. It's an improvement over existing methods of influencing cell behavior, none of which allows control over cell-surface activities with chip-bound molecules. Researchers say the achievement could be a key step toward a wide variety of applications—everything from new drug assays to prosthetic aids that replace damaged neurons.

    Cell behavior is controlled by a large number of different receptors on the cell surface that respond to hormones, growth factors, and various other types of regulators. Chicago's Milan Mrksich and his colleagues devised a way to first attach certain peptide regulators, or ligands, to their chip, and then to release them, through electrical means. Because cells bind to the ligands, this allowed the researchers to turn the chip on for cell attachment or turn it off, thus releasing the attached cells. “The technique would be useful for studying subjects in cell biology like cell migration that depend critically on the cell's surface composition,” says synthetic chemist George Whitesides of Harvard University, a former mentor of Mrksich. “In the longer term, it could serve as a basis for new types of assays for drug screening.”

    Researchers have previously designed ligand-coated substrates for attaching cells, but these have been limited by the static composition of their surfaces. Mrksich wanted to develop a dynamic substrate—a chip that could actively modulate the behavior of attached cells. To achieve that, he and his colleagues coated a gold film with an alkanethiolate, a type of organic molecule that carries hydroquinone groups. Applying a tiny electric voltage to the film triggers an electrochemical reaction, converting the hydroquinone groups to quinones. When molecules containing a ligand bound to another organic group called cyclopentadiene are added to the chip, they will then react with the quinones. This reaction installs the ligand on the surface of the chip, thereby turning it on for cell attachment. Reapplying the electric voltage to the film converts the quinone groups back to hydroquinones, thus causing tethered ligands to break away and release the cells from the substrate—in effect, turning off the chip.

    Mrksich and his colleagues have shown that they can induce cells to migrate on the chip by taking advantage of its switching-on aspect. They first coated small circular regions on the gold substrate with a protein called fibronectin that causes connective tissue cells called fibroblasts to adhere to the spots, leaving the remaining areas covered with alkanethiolates. Then the researchers turned on the chip in the presence of a solution containing a conjugate of cyclopentadiene and a small peptide known to mediate cell adhesion. This caused the conjugate to bind to the chip and, as a result, the cells moved out from the circles and lodged themselves uniformly across its surface.

    Such a system can be used, Mrksich says, to screen either for drugs that promote cell migration, which might be helpful in wound healing, or for drugs that inhibit it, which might have antimetastatic effects. For example, the team has tested two compounds, known for their antimigratory properties, and found that both blocked cell migration on the chip. “One goal of the work is to improve the design of cell-based sensors for drug screening,” says Mrksich, whose results are in press at Angewandte Chemie.

    Strategies for precise and selective cell attachments, such as the one used by Mrksich's group, should allow scientists to engineer small surfaces with multiple cell types. That could help in the miniaturization of cell-based sensors and thereby speed up drug screening. “The technology would enable us to screen drug candidates at a much more specific and fine level than what is possible with traditional assays,” says Tom Schall, founder of ChemoCentryx Inc., a drug-discovery company based in San Carlos, California.

    In addition, Mrksich and his colleagues say the technique should work with any ligand, making it useful for studying a wide range of cellular behaviors. These would include cell differentiation during embryonic development and programmed cell death, which culls damaged or excess cells, as well as cell migration. Indeed, says Nicole Sampson, a chemist at the State University of New York, Stony Brook, the chips should provide a handy tool to “help determine the mechanisms of cellular signaling.”

  4. CELL BIOLOGY

    Color-Changing Protein Times Gene Activity

    1. Marina Chicurel*
    1. Marina Chicurel is a freelance writer in Santa Cruz, California.

    For years, researchers trying to capture the bustling activity of genes in living cells have mostly had to make do with snapshots, which clearly fell short of the mark. But now their frustration may be over. On page 1585, a team led by Alexey Terskikh and Irving Weissman of Stanford University and Paul Siebert at Clontech Laboratories in Palo Alto, California, describes a new fluorescent protein that turns bright green when it is first made, then changes to red over several hours—providing the ability to witness how genes alter their activities over time.

    This new fluorescent timer should be widely applicable, enabling developmental biologists, for instance, to monitor how the activities of genes change as cells migrate in the developing embryo. “If a picture is worth 1000 words, then a movie should be worth a novel,” says developmental biologist Randall Moon of the University of Washington, Seattle.

    The Stanford-Clontech group's work builds on the discovery last year, by Mikhail Matz and colleagues at the Russian Academy of Sciences in Moscow, of a fascinating protein, dubbed drFP583, found in coral. The researchers identified the protein based on the similarity of its sequence to that of “green fluorescent protein,” which is widely used to track proteins and cells in living organisms. But drFP583 glowed a bright red. Thus, it could add a new color to cell biologists' palette of protein markers, enabling them to more easily study two separately labeled proteins at once and track their interactions.

    The researchers did not realize at first, however, that there is more to drFP583 than just a new red-colored tag. In fact, recent work described in the 24 October issue of the Proceedings of the National Academy of Sciences by Roger Tsien's group at the University of California, San Diego, and also by Watt Webb at Cornell University in Ithaca, New York, suggested that the protein had some significant drawbacks. Among other things, drFP583 is a very dim green right after synthesis and takes hours to days to develop a red color intense enough to be of practical use. Biologists tracking fast-paced changes in gene expression usually can't afford to wait so long.

    Color shift.

    At 11.5 hours after E5 expression in C. elegans was activated by heat shock (bottom panels), the protein's green color has decreased compared with that at 7 hours (top panels), and its red color has increased.

    CREDIT: ALEXEY TERSKIKH ET AL.

    Shortly after drFP583's discovery, Terskikh generated a collection of mutants, hoping to find proteins that were brighter and developed their color faster. He soon spotted several mutants worthy of attention—and one apparent inconsistency in his record keeping. He noticed that E5, a mutant he had labeled bright green one day, turned a deep fluorescent red the next day. Assuming he had made a mistake in recording the color the first time, Terskikh repeated the experiment and regrew the stock. But once again, the mutant appeared green at first and turned red the next day.

    The change occurred no faster than that of the unaltered drFP583. But E5's initial green fluorescence is much brighter than that of its parent protein, making it easily detectable. To their delight, Terskikh and his colleagues realized they now had a protein that could be used as a fluorescent timer of gene activity. They could simply attach the E5 coding sequence to regulatory sequences of the gene they wanted to track, and the color of the resulting protein would provide an estimate of when the gene had become active. In this context, the protein's delayed color shift, originally considered a detriment, became an asset, because its timing function would have been lost if it turned red immediately after synthesis. “These people, you might say, have figured out how to make lemonade out of a lemon,” says Tsien.
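    The timer principle can be illustrated with a toy calculation (purely a sketch: the calibration points and the linear interpolation below are invented for illustration and are not the authors' measurements). The idea is that as E5 matures, the ratio of red to green fluorescence rises steadily, so a measured ratio can be mapped back to a rough estimate of time since the gene switched on:

    ```python
    # Toy model of a fluorescent-timer readout: the red/green intensity
    # ratio grows as the protein matures, so interpolating a calibration
    # table gives an estimate of hours since gene activation.
    # The calibration values are hypothetical, for illustration only.

    CALIBRATION = [  # (hours since activation, red/green intensity ratio)
        (0, 0.0),
        (7, 0.4),
        (11.5, 1.2),
        (50, 5.0),
    ]

    def hours_since_activation(red_over_green):
        """Linearly interpolate the calibration table to estimate elapsed time."""
        pts = CALIBRATION
        if red_over_green <= pts[0][1]:
            return pts[0][0]
        for (t0, r0), (t1, r1) in zip(pts, pts[1:]):
            if r0 <= red_over_green <= r1:
                frac = (red_over_green - r0) / (r1 - r0)
                return t0 + frac * (t1 - t0)
        return pts[-1][0]  # ratio beyond calibration: clamp to last point

    print(hours_since_activation(0.8))  # a cell partway between 7 h and 11.5 h
    ```

    In practice any such readout would need a real calibration curve for the biological system at hand, since maturation speed can depend on conditions such as temperature and oxygen.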

    Terskikh and his colleagues have since gone on to test their fluorescent timer in living organisms. In one set of experiments, they placed the E5 gene under the control of a regulatory sequence from a gene for a so-called heat shock protein; this sequence causes genes containing it to be turned on by a boost in temperature. The researchers then introduced the hybrid gene into the roundworm Caenorhabditis elegans and monitored its behavior. When the worms were kept at room temperature, the gene was silent and the worms remained colorless. But when placed at 33°C for an hour, the worms turned green, indicating that the gene had become active. As E5 aged, the worms acquired a yellowish-orange hue, and 50 hours after the heat shock, they were mostly red.

    To see whether they could track changes in gene expression during embryonic development, the Stanford workers also linked E5 to a regulatory sequence from Otx-2, a gene needed for normal formation of the nervous system. They injected the gene into embryos of the frog Xenopus laevis and then examined the brains of the resulting tadpoles. As predicted by previous studies of Otx-2 expression, they could see that the gene was expressed in some areas of the brain before others. Those where the gene came on earlier and then turned off appeared orange, while others were greenish, indicating the gene's recent expression.

    Although enthusiastic about the results, Tsien cautions that the E5 mutation likely hasn't solved all of drFP583's problems. Like drFP583, E5 probably aggregates to form tetramers. That could cause problems if researchers try using E5 to track proteins by creating hybrids, because they could clump as well. In addition, E5 requires oxygen to change colors, and it's currently unclear how variations in oxygen concentrations in different biological systems will affect the rate of color change. But Tsien and others suspect it will be possible to engineer new proteins that don't aggregate and, if necessary, are less dependent on oxygen.

    Even before that happens, researchers are eager to test the new timer protein. Leonard Zon of Children's Hospital and Harvard Medical School in Boston is using it to follow the development of blood cells in zebrafish. And many others are lining up after him. “The people [who] would like to use it are more than I can actually handle,” says Terskikh, who is already involved in several collaborations.

  5. NEUROSCIENCE

    Stem Cells Hear Call of Injured Tissue

    1. Laura Helmuth

    NEW ORLEANS—Like a superhero who can hear cries for help from miles away, ever-versatile stem cells somehow sense danger in the brain and spinal cord and rush to the rescue. In animal models, at least, injected stem cells travel to tissue injured by stroke, Alzheimer's-like plaques, contusions, or spinal cord bruises, sometimes traversing long distances. Several teams reported these surprising results this month at the Society for Neuroscience annual meeting. No one knows exactly how stem cells detect these different kinds of damage, but researchers hope that the cells' migratory powers can be harnessed to either replace dead tissue or deliver therapeutics right where they're needed.

    Researchers have known for years that stem cells migrate widely if added to young brains but are fairly dormant in healthy adult brains. Now studies are showing that injuries somehow prompt stem cell movement even in adult brains. “In the abnormal brain, there are new rules. The whole terrain is changed,” says neuroscientist Evan Snyder of Harvard Medical School in Boston. And this newfound motility is being put to use: “A number of us find that we can pull cells out from the nervous system, grow them in a dish, and put them back,” where they migrate to damaged tissue and sometimes repair it, Snyder says.

    The research teams are using a variety of sources for their stem cells. Some come from established cell lines that originated from either human or mouse cells. Other researchers pluck neural precursor cells from where they occur naturally in the primate brain, in a layer of tissue that surrounds the ventricles. Others implant stem cells from embryonic tissue. Regardless of their origin, the cells share one trait: They still have developmental decisions to make. And under the right circumstances, they can be coaxed to turn into neurons or other brain cells called glial cells.

    In many disorders that strike the motor system, replacing glial cells might be a better strategy than building new neurons, says neuroscientist Jeffery Kocsis of Yale University. Most spinal cord injuries, for instance, don't completely cut the axons that run through the cord. But bruising or crushing the cord kills the tissue, called myelin, that insulates the axons, leaving axons exposed and unable to conduct signals. To see whether stem cells could act as glial cells and build up myelin around bare axons, Kocsis and his team made small, demyelinating lesions in the spinal cords of monkeys. Using a punch biopsy, they then plucked neural precursor cells from the rim of the injured monkeys' ventricles, multiplied them, and injected the cells back near the injury. The cells appeared to seek out damaged axons and rewrap them with myelin. Kocsis's team is now testing whether the newly insulated neurons conduct nerve impulses better than untreated ones. “The feasibility is there” for remyelinating exposed axons, Kocsis says, a strategy that could also be useful in treating multiple sclerosis.

    To the rescue.

    Stem cells (red) infiltrate a brain tumor (green).

    CREDIT: EVAN SNYDER/HARVARD UNIVERSITY

    In another model of motor system damage, Jeffrey Rothstein of Johns Hopkins University and colleagues found that stem cells can migrate along the entire length of the spinal cord, at least in mice and rats. The team infected the animals with a virus that causes the same sort of neuronal damage as amyotrophic lateral sclerosis does in humans. The virus kills neurons at the base of the spinal cord, paralyzing the animal. When the Hopkins team injected mouse neural stem cells into the cerebrospinal fluid, the cells migrated from the top of the spinal cord to the base and clung to injured areas. Untreated control animals remained fully paralyzed, but 8 weeks after receiving the stem cells, half of the treated rats could move their limbs somewhat. Human-derived stem cells didn't work reliably, Rothstein says; the researchers aren't sure why.

    Stem cells can also find their way to damaged spots in the brain. To model Alzheimer's disease, researchers led by Barbara Tate of Children's Hospital in Boston injected amyloid, a protein that accumulates into plaques characteristic of Alzheimer's, into one side of rats' brains. In control rats, they injected benign proteins. The researchers then injected stem cells into a ventricle on the opposite side of the brain. The cells crossed to the opposite hemisphere and found their way to the amyloid deposits but ignored the control protein.

    Stem cells don't always make the cross-hemisphere trek successfully. In a model of traumatic brain injury in mice, stem cells injected near the injury migrated to it, as well as to diffuse white-matter damage throughout the injured hemisphere, reported Tracy McIntosh of the University of Pennsylvania in Philadelphia. There, they apparently repaired at least some of the damage: 12 weeks later, treated mice were walking relatively gracefully across a moving cylinder while the others stumbled about. But when the team injected stem cells into the opposite hemisphere, the cells did not find the injury or help recovery.

    Looking at a different axis, Snyder's team has found that stem cells do move reliably from the back of the brain to the front. They injected stem cells into the rear brains of adult rats with induced strokes in the forebrains. The cells found their target, coating the rim of the stroke lesion and restoring some movement. But many questions remain, says Snyder. They don't know whether the stem cells replace dead cells near the lesion, improve connections among remaining cells, or perform some other function.

    Stem cells might also be enlisted to deliver therapeutics to the right spot, according to a report in the 7 November Proceedings of the National Academy of Sciences. Brain tumors send tentacles throughout large areas of tissue, making tumors tough to eradicate. Karen Aboody of Children's Hospital in Boston, working with Snyder and colleagues, inserted into stem cells a gene for a molecule that shrinks tumors. They then injected the cells into several sites in rat brains. The stem cells surrounded the tumors and “chased down” the cancer cells the tumor spins off, they report, thereby shrinking the animals' tumor burden.

    These studies, especially Aboody's, “show in a clinical context that we might be able to use this ability” of stem cells to migrate, says Ron McKay of the National Institute of Neurological Disorders and Stroke. The next trick will be figuring out how stem cells know where to go. Damaged tissue must send out long-range signals, McKay says, and researchers are already on the lookout for them.

  6. MATERIALS SCIENCE

    Armenia Wants Second Mideast Synchrotron

    1. Robert Koenig

    BERN, SWITZERLAND—You might call it a case of open SESAME: Middle Eastern scientists who for years have been yearning for a synchrotron may wind up with two.

    SESAME (Synchrotron Radiation for Experimental Science and Applications in the Middle East) is an 11-nation consortium formed to install and operate a first-generation synchrotron now mothballed in Germany (Science, 25 June 1999, p. 2077). Last spring, SESAME selected Jordan as the site of the 0.8 giga-electron-volt (GeV) BESSY-I synchrotron, disappointing Armenian officials who had hoped to snare the prize. But their dejection didn't last long. This month Armenia moved to the head of the line for a second, brand-new synchrotron after securing a $15 million down payment from the U.S. Congress as part of a foreign-aid spending bill that would funnel $90 million to the country.

    Armenia's latest strategy came as a surprise to SESAME, says Herwig Schopper, the former director-general of CERN, the European particle physics laboratory near Geneva, who heads SESAME's interim council, which met earlier this month in Yerevan, Armenia. The country's representatives say they hope to raise another $15 million from wealthy Armenian-Americans and to win support from the Armenian government for operating the machine.

    Schopper says the Armenian plan is feasible as long as enough money and users can be found for both synchrotrons, which would be managed by SESAME. Researchers may be attracted by the new machine's power (2 to 3 GeV), which would put it in a class with the new ANKA synchrotron in Karlsruhe, Germany, and well above that of the BESSY instrument even after its planned upgrade (to less than 2 GeV). The Armenian instrument also might be more convenient for scientists in nations such as Pakistan or Iran.

    SESAME has asked its oversight panels to canvass for users and provide advice for the Armenian proposal. The previously approved Jordan project, meanwhile, still needs to raise about $6 million to upgrade BESSY and $8 million or more to equip a site near Amman. Schopper expects key decisions on both projects to be made at the council's next meeting, scheduled for March in Cairo. “If we're fortunate, we'll get two instruments,” says Schopper. “But we're confident we'll get at least one.”

  7. PHILANTHROPY

    Moore Foundation Targets Science

    1. David Malakoff

    Moore's Law now applies to philanthropy as well as computing power. Last week, computer industry titan Gordon Moore and his wife, Betty, announced that they are creating a $5-billion-plus foundation to support scientific research, conservation, and higher education. When it is fully funded a few years from now, the foundation is expected to rank among the dozen largest charities based in the United States.

    Computer engineer Moore, the 70-year-old co-founder of processing-chip giant Intel Corp., is widely known for his observation that innovations were doubling computer processing power every 18 to 24 months. Moore's Law became a buzzword in the booming computer industry and helped to boost Moore's net worth to nearly $15 billion, mostly in Intel shares.

    Now, the Moores want to share their new-economy wealth with researchers, university educators, and environmentalists. “Gordon is fairly passionate about looking for higher risk [research] projects that would not normally be funded by the National Science Foundation or the National Institutes of Health,” says Lewis Coleman, who will become president of the San Francisco, California-based foundation early next year. He is currently chair of Banc of America Investment Services Inc.

    The charity's agenda will be shaped over the next year as Coleman hires dozens of staff members, recruits board members, and picks the brains of advisers. “We've just started to seek advice from the scientific community,” he says. But basic and applied environmental studies and the physical sciences are among the “underserved” areas likely to benefit. Science “that would have an impact on protecting the environment [will be] an important, but not exclusive,” focus of the foundation, he adds.

    The Moores have already made an international mark in science philanthropy. They have given millions to the California Institute of Technology in Pasadena, California, $35 million for biodiversity protection to Conservation International, and $17 million to Cambridge University in the United Kingdom for a state-of-the-art physical sciences library and have built laboratories at Stanford University and the University of California (UC), Berkeley. Moore has also supported SETI, the Search for Extraterrestrial Intelligence, and a UC field research station on Moorea, a Pacific island next to Tahiti.

    “Gordon Moore has impeccable taste and judgment when it comes to donating money—he could set the pace for other funders in a number of areas,” says one former university administrator who has worked on securing gifts from the executive. “But it may be harder than he expects to find the niches where the foundation can make a clear-cut difference.”

    Still, Coleman is confident that the new Moore Foundation will have an impact. “We are open to joint ventures,” he says, “and intrigued by [their] ability to bulk up and have influence on specific issues.”

  8. INFECTIOUS DISEASE

    The Enigma of West Nile

    1. Martin Enserink

    A year after it invaded New York City, there's still much that scientists don't know about the West Nile virus. But they're sure it will keep spreading.

    When the West Nile virus gained a foothold in New York City last summer, it found a land of endless opportunities. It had its pick of dozens of bird species that had never been in contact with the virus before; who knew how many might be hosts in which the virus could live? There are more than 100 mosquito species in the United States—how many could transmit the virus from one bird to another? How fast would the newcomer colonize the entire continent?

    Fifteen months after the 1999 West Nile outbreak, which sickened 62 mostly elderly people and killed seven, scientists now have some answers. They are not reassuring. This summer, the human toll has been relatively mild, with just 18 cases and one death. But the virus has been found in more than 60 bird species and about a dozen mammals; in a little more than a year, it has spread to 11 states along the East Coast and the District of Columbia. “There's a good chance it will reach the West Coast within 5 years,” predicts vertebrate ecologist Nicholas Komar of the Centers for Disease Control and Prevention (CDC) in Fort Collins, Colorado. And with no natural barriers to stop it, he adds, it's just a matter of time before even the citizens of Buenos Aires will have reason to worry.

    Yet researchers are still hard pressed to predict how abundant the virus will eventually become or how serious a public health threat it will pose. Like St. Louis encephalitis, which occasionally flares up in the southern United States, West Nile virus is primarily a bird virus that is spread by mosquitoes. Humans, as well as horses and several other mammals, are “dead-end hosts.” They become infected when a mosquito bites an infected bird and then a human. But the disease stops there; unlike with malaria, say, mosquitoes don't transmit the West Nile virus from person to person. That puts a natural limit on the human epidemic, but the complex dynamics of birds, mosquitoes, and humans also cause erratic outbreak patterns. “People have studied St. Louis encephalitis for 50 years, and outbreaks are still unpredictable,” says Lyle Petersen, a CDC physician who studies the West Nile virus. “They just sort of happen.”

    North or south?

    After its surprise debut last year, some researchers said there was a 50-50 chance that West Nile would never make it in its new home. Except for one infected crow found in Baltimore, the virus seemed confined to an 80-kilometer radius around New York City, and as temperatures dropped and mosquito populations dwindled, they said, it might well die out. Alternatively, some researchers predicted that migrating birds would carry the virus south as they escaped the New York winter; it would likely show up in Florida or the Caribbean in spring. Neither scenario proved right. This winter, after searching for mosquitoes in underground sewers and abandoned buildings in New York, CDC researchers discovered a few overwintering mosquitoes, and one sample was infected with the virus, dashing any hopes that the virus would simply disappear. Nor did the southward migration occur—at least not initially. The virus has mostly traveled north this past summer. From July on, increasing numbers of infected birds were found in upstate New York, Connecticut, Rhode Island, Massachusetts, Vermont, and New Hampshire. “That took most people by surprise,” says Linda Glaser, a researcher with the U.S. Geological Survey's National Wildlife Health Center in Madison, Wisconsin. Only late this summer did the virus head south. In early October, a dead crow in Virginia tested positive; so did one from Chatham County in North Carolina a few weeks later. Chatham, some 800 kilometers from New York City, is now the southernmost point where the virus has been found.

    Ideal hosts

    As some researchers tracked the virus in the field, others have been studying its behavior in the lab, trying to determine how it spreads so rapidly. Crows are the virus's most conspicuous hosts because they have been dying en masse. But that doesn't mean they're the most helpful to the virus, says Komar. An ideal host would let the virus replicate for a long time but stay healthy enough to be fed on by mosquitoes. When the host dies quickly, as do crows, the virus goes with it.

    Dozens of other bird species tested positive last year for West Nile antibodies, and Komar has studied how well the virus replicates in seven of them. Blue jays were “off the charts,” says Komar; they had over a trillion viral particles per milliliter of blood when the infection peaked—similar to what has been found in crows. The humble house sparrow also came in high, although its viral load was 1000 times lower than the blue jay's. But because house sparrows are so ubiquitous, they may be the virus's prime replication machine, suspects Komar.

    Meanwhile, a team led by Michael Turell of the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) in Fort Detrick, Maryland, has been trying to identify the virus's insect accomplices. The team collected two dozen species of mosquitoes in New York City, Massachusetts, and Virginia and then turned them loose on infected chickens. Two weeks later, they tested whether the mosquitoes themselves had become infected and whether they could transmit the infection to other chickens. Three Aedes species turned out to be highly susceptible to West Nile infection, and several others—including Culex pipiens, which was first implicated in the outbreak—were moderately efficient vectors.

    But lab tests aren't the final word, says USAMRIID's Monica O'Guinn. Each species' role in spreading disease also depends on such factors as population density and feeding habits, about which relatively little is known; these in turn may depend on geography or weather. Some mosquito species are most likely to fuel the virus's avian life cycle, as they primarily feed on birds; others bite both birds and humans and might serve as the “bridge species” that makes people sick.

    No early warning

    With so many variables in play, the dynamics of the outbreak have been difficult to understand, let alone predict. It's “somewhat of an enigma,” for instance, why the virus made such impressive strides across the U.S. map this summer yet sickened only 18 people, Petersen says. He speculates that certain conditions might have favored an explosion in bird transmission but not human transmission. Some attribute at least part of the low case rate to intense spraying of insecticides. Increased public awareness may also have played a role. In New York City, for instance, the public was bombarded with the message that elderly people, especially, should protect themselves from mosquito bites, says assistant commissioner Marcelle Layton of the city's department of health.

    Another riddle of the 2000 outbreak is why an early warning system for viral activity was such a fiasco. In several states, researchers bled so-called sentinel chickens weekly and tested them for West Nile antibodies. For other insect-borne viral diseases, such as St. Louis encephalitis, chickens become infected before people do and are a time-proven indicator. But very few chickens became seropositive for West Nile virus this summer, and not one did so before the first human cases.

    Dead birds, on the other hand, are too sensitive an indicator; they have been found in many states where there has not been a single human case, and spraying insecticides would have been premature. CDC researchers are now focusing their efforts on developing surveillance indicators that are a better predictor of human cases, says CDC's Petersen, so that public health authorities can take precautions once the risk becomes substantial. For instance, researchers are now combing through the data gathered this year in Staten Island, the New York borough where there was a cluster of cases. The hope, says Petersen, is that the data will reveal a pattern that could have foreshadowed the epidemic. Researchers found high numbers of infected Culex salinarius mosquitoes in the area; they may prove to be significant. “But in all likelihood, it may be a couple of years before we can adequately predict the risk,” he says.

    For that reason alone, Petersen won't speculate where the disease will next crop up—but he knows it will. And, because no drugs or vaccines exist (see sidebar on p. 1483), West Nile is bound to claim new human victims. “We know it's going to be more than just a couple, and we know it's not going to be hundreds of thousands,” says Petersen. “But we can't say much more with any certainty.”

  9. INFECTIOUS DISEASE

    West Nile Drugs, Vaccine Still Years Away

    1. Martin Enserink

    Most people infected with the West Nile virus don't even know it, or they experience only mild, flu-like symptoms. But for 1% or 2% it can be a terrifying, life-threatening experience. For these patients, there are no effective drugs; all doctors can do is treat symptoms as best they can, while patients fight the brain infection themselves. Many of these patients end up with lingering neurological damage, as often occurs with encephalitic infections, says Marcelle Layton, assistant commissioner for health in New York City. Layton is following 42 survivors of the 1999 outbreak around New York, which killed seven people. Those over 65—a majority of all patients—are especially vulnerable: 3 months after the outbreak, 70% still reported muscle weakness, 75% suffered from memory loss, and 60% from confusion. Although most were previously healthy and active, more than half can no longer live at home. “They've lost their independence,” says Layton.

    With support from the National Institutes of Health (NIH), several research groups are taking the first steps toward developing drugs to fight the acute phase of the disease. Last month, for instance, a team led by Ian Lipkin from the University of California, Irvine, published a paper in the Journal of Infectious Diseases showing that the drug ribavirin—already in use to treat hepatitis C and several other viral diseases—could halt replication of the West Nile virus in cell culture. Lipkin's team now wants to test the compound in infected mice. But other researchers have their doubts; they point out that Lipkin's team used high doses that could prove toxic and, perhaps more important, ribavirin doesn't cross the blood-brain barrier well, so it wouldn't get to where it's most needed.

    Ribavirin was tested in 35 West Nile patients in Israel this summer, says Silvio Pitlik of the Rabin Medical Center in Petach Tikvah. Although the results are difficult to interpret because there was no control group, “ribavirin didn't seem to affect the outcome,” says Pitlik.

    Other candidates are in the wings. At Utah State University in Logan, a group led by virologist Robert Sidwell is screening potential West Nile drugs sent in by a variety of research institutes and companies. Out of 41 compounds tested so far, five have shown strong antiviral activity and didn't seem toxic in cell culture, says team member John Morrey. Like ribavirin, several are already on the market for other diseases, which means they could win approval for use in West Nile patients a little faster than a new compound could. But with so few patients worldwide, West Nile drugs won't exactly be blockbusters, giving companies little incentive to develop them. “I would severely question whether any company would put the money in,” says Lyle Petersen, a physician at the Centers for Disease Control and Prevention in Fort Collins, Colorado.

    Meanwhile, NIH is also supporting the fast-track development of a West Nile vaccine by OraVax, a biotech company in Cambridge, Massachusetts (owned by Peptide Therapeutics in the United Kingdom). OraVax has a patented strategy for developing vaccines against flaviviruses, the family to which the West Nile virus belongs. The technique, developed by Thomas Chambers of St. Louis University, uses as the vaccine's backbone a weakened yellow fever strain called 17D, which has been used widely as a yellow fever vaccine since the 1930s. OraVax replaces the gene that encodes the yellow fever virus's coat, or envelope, protein with the corresponding gene from the West Nile virus. OraVax's Thomas Monath predicts that the company could start human trials with the West Nile vaccine in just 2 years.

    NIH has pledged $3 million to carry the vaccine through phase I clinical trials; after that, the company will have to go it alone if, indeed, the vaccine looks promising and the company decides there is a market. It wouldn't make sense to vaccinate the entire population for such a rare disease; rather, elderly people in high-risk areas would be the most obvious candidates. With West Nile now striking the well-educated, relatively affluent urban centers of the East Coast, there may be just enough public demand to make the vaccine viable, says Monath, “but much will depend on how the disease evolves over the next two summers.”

  10. IRANIAN SCIENCE

    Iran's Scientists Cautiously Reach Out to the World

    1. Robert Koenig

    Two decades after Iran's Islamic revolution, a reformist president wants researchers to expand collaborations with colleagues abroad

    TEHRAN—When Reza Mansouri left a cushy university post in Vienna to return to his native Iran in 1980, the young theoretical physicist found himself caught up in a whirlwind. In the wake of the revolution that overthrew the Shah the year before, the new “Supreme Leader”—the Ayatollah Ruhollah Khomeini—was reinventing Iran as an Islamic republic, and talented researchers were fleeing in droves to safer and more lucrative posts abroad. Mansouri was going against the flow—an opponent of the Shah, he says he wanted “to do something for my country. Everything was in turmoil then, and science was low on the list of priorities.” Along with his colleagues, Mansouri endured Iraqi rocket attacks against Sharif University of Technology, where he is a physics professor, during Iran's bloody 8-year war with Iraq in the 1980s. “Back then, people talked about ‘Islamic physics’ and ‘Islamic science.’ It took a long time for them to understand that physics is physics and science is science. But I think we're on the right track now.”

    Restoring an Islamic tradition?

    Science Minister Moin and President Khatami intend to pour more resources into research. Reza Mansouri (bottom), who returned to his homeland 20 years ago, thinks that Iran's science is finally back on track.

    CREDITS: R. KOENIG

    Today, after what one Iranian chemist describes as “20 years of frustrating trial and error,” science in this politically isolated but oil-rich nation may be on the verge of resurgence. The nation's reform-minded president, Mohammad Khatami, and his allies are promising more money for R&D and are reorganizing universities to beef up graduate education and research. They are also cracking open the door to closer cooperation with scientists abroad—including those in the United States, the country Khomeini branded “the Great Satan” in 1979 when revolutionaries occupied the U.S. Embassy here, holding 52 diplomats hostage for more than a year.

    Such reforms may seem basic, but Iran's government is walking a tightrope. It's pressured by hard-liners on the right who oppose reforms and by liberal university students—many of whom now glimpse the outside world via the Internet and satellite TV receivers—eager for change. Indeed, even as the hard-liners celebrated the anniversary of the embassy takeover on 3 November with their annual anti-American, flag-burning festival in the streets of Tehran, moderates were quietly working to bridge a gulf of mistrust and heal old wounds. In the first high-level visit in more than 2 decades, a U.S. delegation led by National Academy of Sciences (NAS) president Bruce Alberts visited Iran in September, afterward issuing a joint statement with Iran's academy that pledged to initiate six joint workshops over the next 2 years. Although that NAS visit was carefully kept out of the Iranian news media and was not sponsored by either government, Sharif University Professor Abolhassan Vafai—one of the Iranian organizers—told Science that it represented “a breakthrough.”

    But serious obstacles must be overcome if this budding relationship is to blossom. These range from logistical hurdles—such as the lack of diplomatic relations between the two countries, making it difficult for researchers in either nation to obtain visas—to lingering disputes, including U.S. State Department allegations that Iran is developing weapons of mass destruction. The latter led to a U.S. ban on the sale of many types of scientific equipment to Iran. And looming in the background—like the five-story-high murals of Khomeini all over Tehran—is the ever-present threat of another crackdown by Iran's powerful Islamic conservatives.

    Despite those barriers, interviews in Tehran, Shiraz, and Isfahan with three dozen Iranian scientists and university officials—many of whom earned their Ph.D.s in the United States—revealed that researchers here are eager to join the world's scientific community, even if they risk retaliation by hard-liners. “Our scientists can't work in a vacuum,” says Habib Firouzabadi, president of Iran's Chemical Society and a professor at Shiraz University. Although much of Iran remains a scientific backwater today, Mansouri, who heads Iran's Physical Society, predicts that with better funding and international connections Iranian science will rise to world-class levels in fields such as math and physics within a half-century.

    Deep roots

    Ten centuries ago, scientists in Persia, the ancient lands that later became Iran, were light-years ahead of the rest of the world. During that Golden Age of Islam, Persian mathematicians helped complete the system of decimal fractions and made advances in algebra and trigonometry, while chemists pioneered uses of alcohol and sulfuric acid.

    Although Persian mathematicians were renowned for centuries, scientists in Western Europe eventually eclipsed their Islamic counterparts. And it wasn't until the second half of the 1900s that Iran's secular government built top-notch technical universities and institutes, including Tehran's Aryamehr (now called Sharif) University of Technology—modeled in part on the Massachusetts Institute of Technology—and Shiraz University, which cultivated close ties with the University of Pennsylvania. Iranian colleges sent their brightest graduate students abroad for study.

    After the 1979 revolution, Sharif, Shiraz, and other universities severed ties with the West, and many top professors fled the country. One informal study found that there are now nearly twice as many Ph.D.-level Iranian-born physicists in the United States as in Iran. Although Islamic revolutionary leaders backed higher education, their emphasis was on mass education rather than “elitist” research. Science suffered most during the strife-torn 1980s, when Iran's leaders were preoccupied with restructuring the country and the war with Iraq sapped resources.

    During the 1990s, Iran devoted more attention to higher education; a newly discovered oil field and the hike in world oil prices are now pumping up resources for universities and research. Today, Iran has nearly 100 government-run higher education institutions—more than four times the number of state colleges in 1978, with nearly an eightfold increase in students. Doctoral programs in natural sciences—rare before 1979—have taken root in many parts of the country. At Shiraz University, for example, Chancellor Mahmood Mostafavi says there are now about 300 Ph.D. students, versus just one in 1983.

    Despite the burgeoning numbers, many Iranian scientists complain that the quality of education and research suffers because few professors have adequate time or funding. “Universities have grown like mushrooms, but they haven't redefined their goals,” which should include “a major increase in resources for research,” says Firouzabadi.

    Another serious problem is the brain drain, which has spurred the government to find ways to keep top scholars at home. One lure is the Institute for Advanced Studies in Basic Sciences, founded in 1993 in the northern city of Zanjan by one of Iran's top scientists—astrophysicist Youssef Sobouti, known for star-cluster research. Thanks largely to his center and the 12-year-old Institute for Physics and Mathematics in Tehran, Sobouti contends that “physics research, judged by publications in international journals, has improved markedly in Iran over the last 10 years.” He encourages his students to make use of the Internet, foreign journals, and overseas contacts. But not every Iranian researcher has good opportunities to travel abroad or access to scientific literature. The most recent issue of Science available at Shiraz's regional science library is 6 years old.

    Outside scientists tend to agree that theoretical physics and mathematics—neither of which requires much instrumentation—are Iran's healthiest disciplines. And although biology lags behind other fields, Iran gets good marks in traditional branches of chemistry. “The quality of the science I saw was quite high, and their lab equipment was better than I had expected,” says Alberts. Echoing that view is Nobel laureate F. Sherwood Rowland, who visited Iran with the NAS group. “A few top universities and medical research units have first-class personnel, with equipment that's a little behind the curve but still competitive,” he says. But Alberts and Rowland concede that they may have been shown only the best of what Iran has to offer.

    Eroding the barriers

    Iranian officials say they're committed to nurturing the best scientific centers and shoring up weak disciplines. “We want to enhance the quality of science here, increase access to the Internet, and bolster the links between universities and research centers,” Iran Science Minister Mostafa Moin told Science. He says he expects to get approval for “new roles and missions” for his ministry by March, including directing a reorganization of the national research labs and the university system to promote graduate studies. Moin also notes that the government has pledged to boost R&D spending from 0.44% of the gross domestic product—about $350 million in 2000, including $25 million for basic research—to a more robust 1% over the next 5 years, with industry expected to contribute another 0.5% of GDP to research.

    But the government has fallen short on similar promises in the past, skeptics say. “The pattern is that the government and parliament approve more funds for science, but the administration doesn't get that money to researchers,” says Mansouri, who heads the National Research Council's basic science committee.

    Meanwhile, Moin and President Khatami are encouraging Iranian scientists to strengthen ties with foreign researchers. In a speech at a meeting of the Third World Academy of Sciences here in October, Khatami said that science transcends politics and that scientific dialog could help bridge the gap between Iran and “those countries with which our relations are not totally amicable.” Besides hosting the Third World Academy conference, Iran has reached out to two international efforts: It has agreed to take part in the Large Hadron Collider at CERN, the European particle physics laboratory near Geneva, and plans to contribute to SESAME, a synchrotron that will be moved from Germany to Jordan as soon as funds are raised.

    A grander challenge is to build research capacity within Iran. Scientists here complain that U.S. rules barring the export of technology to Iran make it extremely difficult to buy U.S.-made instruments or supplies. “Even the simplest scientific equipment for education seems to be considered ‘sensitive,’” says astrophysicist Ahmad Kiasatpour of Isfahan University.

    That ban continues, but the State Department has eased some restrictions on travel to Iran by U.S. researchers, and the two nations appear to be using scientists to test the waters of rapprochement. A breakthrough came in December 1998, when a U.S. delegation got permission to attend a conference in Tehran on nonrenewable energy sources. Then, after 8 months of negotiations between the Iranian and U.S. science academies—mediated by Jeremy Stone, then president of the Federation of American Scientists, and Iranian-born chemical engineer G. Ali Mansoori of the University of Illinois, Chicago—Iran in September 1999 sent a delegation to visit the NAS and scientific societies in Washington, D.C. This year's visit by the U.S. delegation was the latest in this tentative pas de deux.

    Diplomatic restrictions, however, sharply limit informal research cooperation. Some U.S. scientists complain that Iran is reluctant to grant them visas; for that reason, the vast majority of scientific visits are made by Iranian-born Americans who use their Iranian passports to gain entry. Meanwhile, Iranians complain bitterly about being fingerprinted, photographed, and questioned each time they enter the United States to attend conferences. “They treat you as if you were a criminal, not a scientist,” says physicist Mehdi Golshani.

    Better late than never

    Reza Malekzadeh is forging ties with the West to understand Iran's esophageal cancer problem.

    CREDIT: R. KOENIG

    Behind those restrictions lie U.S. suspicions that Iran is channeling resources into weapons programs. “Iran's pursuit of weapons of mass destruction and ballistic missile delivery systems continues unabated and has even accelerated in the last few years,” contends Robert J. Einhorn, assistant U.S. secretary of state for nonproliferation. In testimony last month before a Senate panel, Einhorn aired concerns over a Russian firm's construction of a 1000-megawatt nuclear reactor in Bushehr, which will be Iran's first nuclear power station. The project, he stated, could “be used by Iran as a cover for maintaining wide-ranging contacts with Russian nuclear entities,” which might prove useful for developing nuclear weapons.

    Iranian officials say the Bushehr reactor is merely a power plant and deny that the country maintains efforts to build chemical, biological, or nuclear weapons—pointing out that Iran has signed treaties against proliferation. And although the Shahab missile program is an open secret, Iranians claim it is a defensive effort necessitated mainly by the threat from neighboring Iraq, which rained SCUDs on Iranian cities during the bloody 8-year war.

    A long road ahead

    Although science is a higher priority in today's Iran, not everyone is convinced that reforms will take root quickly—especially if the nation's hard-liners, led by Ayatollah Ali Khamenei, continue to wield ultimate power. Although the Khatami administration and parliament have become more liberal, hard-liners still control the Guardian Council, which vets proposed laws. The government has suppressed student dissent and shut down liberal newspapers. Meanwhile, a fundamentalist terrorist group—the Mujahedeen Khalq—has unleashed scattered attacks, prompting the government to assign revolutionary guards toting submachine guns to escort Third World Academy scientists on visits last month to universities and archaeological sites.

    In spite of these internal tensions, Iranian scientists see growing opportunities to break out of years of isolation. Take the effort to explain northern Iran's “esophageal cancer belt,” where such cancer is 10 times more common than elsewhere. Last year, Reza Malekzadeh, who heads Tehran University's new Digestive Disease Research Center, visited scientists at the U.S. National Cancer Institute (NCI) to revive interest in cooperative research, and the NCI reciprocated recently by sending a staffer on an exploratory visit to Iran.

    Early next year, one of Malekzadeh's young researchers will be a visiting fellow at NCI, learning techniques for tracking genetic links to cancer. Meanwhile, Malekzadeh and other members of Iran's medical academy have invited one of the top experts on Iran's cancer belt—University of Montreal researcher Parviz Ghadirian—to visit his homeland for the first time in 19 years to discuss collaborating.

    “By working together, we might get to the bottom of this,” says Malekzadeh. But, he concedes, precious years have been lost: “We should have done this long ago.”

  11. IRANIAN SCIENCE

    Iranian Women Hear the Call of Science

    1. Robert Koenig

    SHIRAZ—In the cavernous main hall of Shiraz University's otherwise modern library, arrows pointing WOMEN and MEN in opposite directions aren't for the toilets. They direct students to the main reading rooms—which are separate, but equal.

    Some witty female scholars joke that the WOMEN sign should be pointing upward, not sideways. For women are a growing presence at Iran's universities: From Shiraz in the south to Tehran in the north, they now make up nearly 60% of incoming classes, almost double their share in 1978. But as in many other countries, some female academics in Iran point to a glass ceiling that keeps their representation among university professors low.

    Separate, but closing the gap.

    Women are making gains in the academic community, says Faranak Seifolddini, despite institutions such as reading rooms for each sex.

    CREDITS: R. KOENIG

    Still, the rise in the number of female students in recent years belies the outsider's impression that Iranian women—who are required to wear chadors in public and are barred from becoming Islamic clerics—are bit players in a male-dominated society. “It was a real culture shock when I returned here 6 years ago,” admits Faranak Seifolddini, who got her Ph.D. in city planning from the University of Wisconsin and is now an assistant professor at Sharif University of Technology in Tehran. “But things are changing, and opportunities are opening up.”

    The situation is a far cry from that in neighboring Afghanistan, where the ruling Taliban, Islamic fundamentalists, have barred women from working or attending universities. In contrast, Iranian women—who played minor roles in science for decades—are now coming into their own. “Nearly two-thirds of the new students in my department are young women,” says chemist Fakhry Seyedeyn-Azad, an assistant professor at the University of Isfahan. Some observers say Iranian women are making gains because they tend to be more dedicated than young men are. “Young women have fewer options than men, so they are devoting more time to their studies,” says psychologist Shahireh Bahri, deputy director of Iran's Office of International Scientific Cooperation.

    That view is echoed by the government. “If women are given equal opportunities, they tend to do better than men, because they are more diligent,” says Research Minister Mostafa Moin. And they have good role models too: The daughter of Iran's president, Mohammad Khatami, is a math major at Sharif University.

    But there are indications that women are having trouble rising to the academic world's higher echelons. According to the research ministry, only about 6% of full professors, 8% of associate professors, and 14% of assistant professors were women in the 1998–99 academic year. However, women accounted for 56% of all students in the natural sciences last year, including one in five Ph.D. students. At Shiraz University, only about 30 members of the 500-strong faculty are women, officials say. Although promotion is based on a point system, one assistant professor contends that “it is more difficult for women to get those promotions.” Adds Seifolddini: “When it comes to advancement at the higher levels, it's still better to be a man. But that may be changing.”

    Others scoff at the notion of a glass ceiling. “In my department, half of the 12 Ph.D. students are women,” says analytical chemist Nahid Pourreza, who became chair of the chemistry department at Shahid Chamran University in southwestern Iran 9 years ago. Seyedeyn-Azad, too, says she hasn't encountered such problems: “I don't think there are barriers to my career because I am a woman.”

  12. IRANIAN SCIENCE

    Earthquake Researchers Prepare for the Next Big One

    1. Robert Koenig

    TEHRAN—When a devastating earthquake rocked the Manjil region in Iran on 20 June 1990, the tremors also shook seismology research in that country. That night, Mohsen Ghafory-Ashtiany rolled out of his bed in Tehran and headed to the danger zone in northwestern Iran, where 13,000 persons perished and 60,000 were injured. He and his then-tiny staff surveyed the damage and mapped out a plan for bolstering earthquake research and preparedness. “The Manjil quake marked a turning point,” he says.

    Antiearthquake epicenter.

    Mohsen Ghafory-Ashtiany with map of Iran's seismic hazards behind him.

    CREDIT: R. KOENIG

    Ghafory-Ashtiany was then less than 6 months into the process of building a new seismology institute from scratch; the Manjil quake shook loose far more funding. Now Ghafory-Ashtiany directs a 190-person staff—including 59 scientists—at the new, quake-proof headquarters of the International Institute of Earthquake Engineering and Seismology (IIEES), which boasts advanced soil analysis and structural engineering labs, as well as a computer center that displays real-time data transmitted via satellite from seismographs scattered across Iran. IIEES is also at the epicenter of an expanding “earthquake-awareness” campaign that includes nationwide TV programs and annual quake drills at every school.

    There are good reasons for the focus on seismology research and education: The Iranian plateau is one of the world's most seismically active regions, and many cities—including Tehran—lie in high-risk earthquake zones. Last century, 20 major earthquakes claimed more than 100,000 lives in Iran and severely damaged hundreds of towns. Before the Manjil quake, however, there were only two Iranian labs devoted to earthquake-related research and fewer than 40 university faculty members in seismology and earthquake engineering. Today there are seven such labs and 260 scientists and engineers. During the 1990s, the number of seismic recording stations in Iran nearly tripled to 43, and the number of “strong motion” stations, designed to measure how quakes affect ground movement, rose from 270 to about 1000.

    The IIEES, established with seed money from the United Nations Educational, Scientific and Cultural Organization (UNESCO), but now fully financed by Iran's science ministry, has a track record in international cooperation, having embarked on research efforts with geophysicists in France, Japan, Russia, Italy, and Switzerland. Most ambitious is the Iranian-French connection, which encompasses projects involving 15 scientists at several French geophysics laboratories. The main focus is to study how Iran's Zagros fold thrust belt deforms, increasing the earthquake risk. To that end, researchers have installed 25 Global Positioning System receivers in and around Iran to measure the movements of mountain ranges, and they plan to build 75 seismological stations along a 750-kilometer-long stretch of the Zagros Mountains.

    A number of U.S. geophysicists, anxious to tap into Iran's seismology data, have approached the IIEES about possible cooperative projects. Ghafory-Ashtiany says no agreements have been reached so far, but the U.S. and Iranian science and engineering academies are considering a joint workshop on “lessons learned from recent earthquakes.” One sticking point to better relations has been U.S. technology-transfer bans, which have prevented the IIEES from buying U.S. seismology and structural-engineering equipment. Instead, the institute orders the devices from Japanese and European manufacturers. “It's a shame,” says Fariberz Nateghi, a Columbia University Ph.D. who heads IIEES's earthquake engineering research center. “We spent $6 million on lab equipment in the last couple of years, and we would have preferred to buy U.S.-made instruments because they are better.” But in spite of the lack of U.S. technology, “we've made great progress over the last 10 years,” says Ghafory-Ashtiany, “and we think Iran is much better prepared for the next major earthquake.”

  13. KIP THORNE

    The Shaman of Space and Time

    1. Robert Irion*
    1. Robert Irion is co-author of One Universe: At Home in the Cosmos (Joseph Henry Press, 2000).

    A generation of physicists probing the extremes of gravity can trace its scientific heritage to one man: Kip Thorne of Caltech

    PASADENA, CALIFORNIA—Kip Thorne was on a roll. Speaking before an overflow crowd at the California Institute of Technology, the visionary physicist enticed his listeners with rapid-fire speculations about gravitational waves, quantum gravity, and wormholes. Suddenly, strains of dramatic music swelled from the front row, streaming from the voice synthesizer of a certain wheelchair-using British physicist. “Thank you, Stephen,” said an amused Thorne to his close friend, Stephen Hawking of Cambridge University. “You can tell he's moved by these predictions.”

    Thorne never hesitates to gaze into the future of his field, but this talk was special: It concluded a recent symposium* to mark Thorne's 60th birthday. The meeting brought together nearly 200 experts on gravity at its strongest and strangest: the domains of black holes, colliding neutron stars, and other exotic deep-space objects. Participants came to honor their mentor, who has led the way in converting Albert Einstein's General Theory of Relativity from a purely theoretical science into an astrophysical and observational one.

    They also came to salute Thorne's knack for informing and delighting the public. From the outset of his career—at a time when popularizing science was taboo in some circles—Thorne crafted articles about his bizarre brand of physics. His adventurous prose has garnered two science-writing awards from the American Institute of Physics, including one for Black Holes and Time Warps, his 1994 book that captivated untold numbers of readers.

    The multifaceted guest of honor is a man described by Caltech president David Baltimore as “Caltech's number-one strange scientist, the prince of counterintuitive science.” Richard Isaacson, program director for gravitational physics at the National Science Foundation (NSF) in Arlington, Virginia, goes further: “I think he's recognized as a national treasure.”

    Such words don't rest easily with Thorne, whose modesty is nearly as legendary as his achievements. For instance, he initially refused appointment in 1991 as Feynman Professor of Theoretical Physics at Caltech because he felt unworthy of the intellectual mantle of the late physicist Richard Feynman. “It's obvious in the community that certain people are ever so much more brilliant,” Thorne says. “I'm a cut or two below the Feynman level.” Moreover, when asked for his career highlights, he unfailingly lists the accomplishments of his former students—including some 40 Ph.D. recipients—rather than his own. “I'm not a poor scientist, but if I have achieved some measure of greatness, it is through my students,” he says.

    “That's Kip's basic generosity,” says symposium co-convener Clifford Will of Washington University in St. Louis, who earned his Ph.D. under Thorne in 1971. “He's just as likely to suggest his ideas to his students and postdocs, then let them carry out the work and get 90% of the credit.” Sandor Kovacs, a 1977 Ph.D. who studied how gravitational waves are generated, says that Thorne's students never forget the integrity they observe in his interactions with others. “Even when they are no longer in physics, Kip's imprint and style is evident in what they do today,” says Kovacs, now a cardiovascular physiologist at Washington University.

    Thorne's path in physics began with his upbringing in Logan, Utah. He and his four siblings were the children of two professors, D. Wynne Thorne, president of the Soil Science Society of America, and economist Alison Thorne, founder of the women's studies program at Utah State University in Logan. Thorne's mother fostered his interest in astronomy; they drew a scale model of the solar system on the sidewalk in front of their house when he was eight. Then, he became enthralled with physics through magazine articles and science books. A catalyst was One, Two, Three … Infinity, by Russian-born physicist George Gamow, which Thorne read four times. (Gamow sent Thorne a Hungarian copy of the book before he died in 1968. The inscription reads: “To Kip Thorne, so that he might not be able to read it a fifth time.”)

    A glowing article about Caltech in Time convinced the teenage Thorne that he should study there, and so he did, receiving his B.S. in physics in 1962. But only upon going to Princeton University did Thorne meet the two men who influenced him most deeply. The first was physicist Robert Dicke, whose group meetings Thorne attended. “From Dicke, I learned the connection of relativity to experiment,” Thorne says. By 1970, he observes, “modern technology was finally catching up with the ideas of general relativity. I tried to play a role in helping the experimental field come along.”

    One of Thorne's recent students, postdoctoral researcher Benjamin Owen of the Max Planck Institute for Gravitational Physics in Potsdam, Germany, says that Thorne's early commitment to relativistic astrophysics was daring. “When Kip started this whole business in the 1960s, nobody in his right mind believed in neutron stars, much less gravitational waves,” Owen says. “You had to be truly disturbed to be interested in this stuff, and worse, to corral a bunch of kids into it.” Bernard Schutz, director of the Max Planck Institute and a Thorne alumnus from 1971, notes that the late University of Chicago astrophysicist Subrahmanyan Chandrasekhar avoided general relativity in the 1950s “because he thought it would be the death of his career. Kip started just as the fog was lifting.”

    The second Princeton physicist under whom Thorne blossomed was his adviser: John Archibald Wheeler, a giant in the field and a pioneer of black hole physics. “I learned the power of physical intuition from John,” Thorne says. “He saw things that were decades ahead of his time.” For example, Thorne credits Wheeler with laying the conceptual foundation for attempts to unify quantum mechanics and general relativity into a quantum theory of gravity. “Only 30 years later did this become a widely accepted and serious thing,” he says.

    Thorne absorbed something equally valuable from Wheeler: his gentle shepherding of the work and careers of his students. According to Wheeler, who attended Thorne's symposium at age 89, those lessons took deep root. “Kip has an unusual ability to size up, challenge, and guide people,” Wheeler says. “His most lasting contribution is the development of people who ask and answer significant questions about nature.” Upon hearing the talks by Thorne's protégés at the Caltech meeting, Wheeler simply said, “It makes my own work feel worthwhile.”

    The tutelage of Wheeler and Dicke served Thorne well when he returned to Caltech in 1966. His early career was a blaze of productivity. Thorne and a clutch of students close to his age performed startling analyses of gravity in the most relativistic settings imaginable: dense star clusters, neutron stars, and black holes. Among his first dozen advisees were Will, perhaps the best known theoretical physicist today working on experimental relativity; Richard Price of the University of Utah, Salt Lake City, who did the first calculations of how a nonspinning black hole settles into a stable state; Saul Teukolsky of Cornell University, who did the same thing for rotating black holes; and William Press, who studied black hole perturbations and is now deputy director of Los Alamos National Laboratory in New Mexico.

    This flourishing period led to Thorne's election to the National Academy of Sciences in 1973 at age 33. That youthful honor confirmed Thorne as the central figure in applying relativity to astrophysics, says Caltech colleague Roger Blandford. “Kip translated the seemingly intractable equations of general relativity into practical tools for the astronomer,” Blandford says.

    The same year saw publication of Gravitation, a massive text written by Thorne, Wheeler, and University of Maryland physicist Charles Misner. The book, known simply as “MTW” after its authors, was 5 years in the making. A generation of graduate students came to rely upon it as their bible. The text broke new ground in being a “breezier, populist” book that also contained hard-core physics, Blandford says. “It was a magnificent achievement. It's still surprisingly fresh and useful today.”

    Thorne's early tenure at Caltech also forged an indelible image that his students recall fondly: the bearded, long-haired professor holding forth on campus or in some wilderness getaway in a New Age tunic, often with a funky pendant around his neck. He dresses more nattily now and is known as a “workaholic, type A-plus, detail-oriented calculator and organizer,” says Price. Still, the trademark beard remains. Hawking, with whom Thorne shares a playful relationship, couldn't resist tweaking Thorne's image at the symposium. “You may have noticed that Kip has a slight fuzziness around him,” he said. “It corresponds to the slight probability that he's not really there.”

    Thorne's garb often seemed to match the far-out nature of some of his research. In 1992, the NSF asked Thorne to refrain from using federal funding for research on wormholes—putative links between distant parts of the universe via tunnels through space-time. Thorne wrote one of the first reputable papers on wormhole physics, recalls Eanna Flanagan of Cornell University, a graduate student at the time. However, the subsequent flurry of wormhole papers made NSF nervous about undue public interest in time machines and other H. G. Wellsian notions.

    Thorne's most enduring NSF-funded work, by contrast, is quite concrete: the $365 million Laser Interferometer Gravitational-Wave Observatory (LIGO) now nearing completion. Physicists hope to use LIGO's instruments to detect the subtle disturbances in space-time set off by violent events in the distant universe (Science, 21 April, p. 420). Thorne has led modeling efforts to predict the types of signals LIGO may detect. Seeing the first wave is his dearest dream.

    It almost became a dream deferred. Throughout the 1980s and early 1990s, NSF needed Thorne's sensible scientific voice to help sway skeptical legislators. Thorne also co-directed the project for several years and smoothed the waters during rough administrative transitions. Hawking believes the inevitable moment of detection will write Thorne's legacy in the annals of physics. “I don't think [LIGO] would have happened if he hadn't pushed it so hard,” Hawking says.

    As usual, Thorne's current work on LIGO is ahead of the curve. With his Caltech students, Thorne is examining how to design radical new detectors that LIGO will need after 2010 to see gravitational waves routinely. The research draws heavily on work by Vladimir Braginsky of Moscow State University, one of several Russian physicists with whom Thorne has built close ties during the past 3 decades—even during the latter part of the Cold War. The Russians feel indebted to Thorne for making their work known in the West. “He always emphasized how important the collaborations were for him, at a time when contact was so dangerous,” says Igor Novikov, now at Copenhagen University in Denmark.

    Novikov adds that he literally owes his life to Thorne, who flew him to California in 1988 and arranged for physicians and a private fund to pay for a critical heart operation. “It was Kip's voice that I heard for the first time in my second life,” he says. “He repeated in Russian, ‘It's fine, it's fine.’”

    Novikov's tale is unique, but common threads weave through other stories told whenever Thorne's clan assembles. For example, his alumni note that Thorne took inordinate care to improve their writing. “You would always get back manuscripts that from a distance looked uniformly red,” says Price, who convened the symposium with Will and Kovacs. “His usual statement was, ‘This was superbly written. I just made a few comments.’”

    Those comments went far beyond revised equations. “He was very concerned about both the correctness of the physics and how well it was written,” says physicist and novelist Alan Lightman of the Massachusetts Institute of Technology. Among physicists, only Chandrasekhar spent so much time reading papers critically, Lightman believes.

    Thorne is just as careful with his own writing, which has won fans among both his colleagues and the Star Trek set. His popular prose bubbles with predictions of the ways in which physicists will peel back the layers of intrigue that shroud their understanding of gravity. Some forecasts have become fodder for wagers in a famous series of bets with Hawking and other physicists. Most recently, Thorne lost a 19-year-old bet with Princeton University astrophysicist Jeremiah Ostriker when LIGO failed to detect gravitational waves by 1 January 2000. “I've been wrong about time scales, but not horrendously wrong,” Thorne says with a smile.

    Thorne's latest predictions feature some old chestnuts and some new treats. He maintains that by 2010 to 2015, a planned space-based cousin to LIGO, called the Laser Interferometer Space Antenna (LISA), will map “in exquisite detail” the warping of space-time as one black hole spirals into another. After 2020, he speculates, successors to LIGO and LISA will detect every collision of black hole pairs or neutron star pairs in the universe, at a rate of several per day. “We'll build up a huge catalog, like a catalog of stars on the sky,” he says.

    More broadly, Thorne foresees a 30-year effort to decode the hum of gravitational waves that almost certainly fills the universe, dating from the violence of its birth. This will complement today's analysis of the faint cosmic microwave background radiation, which preserves imprints of the manner in which the universe evolved. “We have good reason to believe that there will be significant gravitational waves from the size of a few meters to the size of the universe, carrying enormous amounts of information about the very early universe,” Thorne says.

    And to the delight of all, Thorne still thinks about wormholes and time machines. “I expect the laws of quantum gravity will reveal that in contradiction to all directions of recent research, the kind of exotic matter that is essential to the existence of traversable wormholes can exist,” he says. But he follows with a letdown: “Constructing traversable wormholes will be unimaginably far beyond the capabilities of any foreseeable human civilization.”

    Thorne looks ahead benignly confident that he will watch his scientific offspring prove him right or wrong. “I come from a long-lived family,” he says. “I expect to live to be 110.” Since his wildest speculations extend about 50 years into the future, there's no telling what Kip Thorne may yet see.

  14. PARASITOLOGY

    Close Encounters: Good, Bad, and Ugly

    1. Elizabeth Pennisi

    Viewed in light of evolution, host-parasite relationships range from deadly to helpful, depending on the communication between them

    PARIS—Louis Pasteur was a man of many disciplines. Over the course of a career that spanned half a century, he ranged from chemistry to microbiology and virology. He even dabbled in the origins of life. His many achievements, from discovering handedness in organic molecules to showing that germs cause disease, are testimony to the power of synthesizing ideas from diverse walks of science. A similar synergy was evident last month at a meeting convened here in Pasteur's honor. As an unlikely mix of virologists, bacteriologists, parasitologists, and molecular biologists—each dealing with different microorganisms in distinct ways—discussed their work, they came to better appreciate evolutionary biologist Theodosius Dobzhansky's observation that nothing makes sense except in the light of evolution. Yet, they also lamented, evolution is often considered outside the bailiwick of microbiologists, particularly those studying infectious diseases.

    Microbiologists often focus on one organism and its relationship to its host at one point in time. But stepping back to view the whole range of relationships between microbes and their hosts reveals that “there's a spectrum [of microorganisms] from the highly virulent to barely pathogenic,” says Stephen Beverley, a molecular parasitologist at Washington University in St. Louis. What's more, the host-parasite relationship changes over time, often shifting from adversarial to friendly, and this relationship “is at different stages for different pathogens,” adds Beverley. The best way to understand evolution's pivotal role in shaping these relationships, agrees Simon Wain-Hobson, a virologist at the Pasteur Institute in Paris, is to monitor pathogen-host interactions through time. And although some researchers, including Wain-Hobson, are now doing this, too many are not, he says.

    Taking this long-term view reveals the incredible capacity of the microbial genome to change—gaining and losing nucleotide bases, entire genes, or even sets of genes. Research is also showing that much of that change is shaped by the microbe's interactions with its hosts and its environment. “At the start [of the interaction], you see genetic changes,” explains Jörg Hacker, a microbiologist at the University of Würzburg in Germany. These changes might enable one bacterium to ravage a gut cell, say, or another to disarm a host's defenses. Those alterations in turn trigger changes in the host. Eventually, “you have this kind of cross talk,” he explains, leading to perpetual evolutionary one-upmanship or harmonious rapprochement.

    The good

    Stanford University molecular biologist Sharon Long has been exploring a cooperative interaction that might once have been an adversarial one: the symbiosis between a plant and its nitrogen-fixing bacteria. “Her work gives us insights into the signaling between the bacterium and the plant,” says microbiologist James Kaper of the University of Maryland, Baltimore. Over the eons, the partners have evolved a way to recognize each other's signals, thereby sidestepping the damage that typically results from many plant-bacteria interactions.

    Like many plants, the legumes pea, clover, and alfalfa make compounds that sometimes serve as pigments or as deterrents to either insects or microbial pests. But to the Rhizobium bacterium, those compounds—in this case, substances called flavonoids—are a perfume that helps identify its future partner. As Long and others have shown, the relationship has evolved so that one flavonoid plays an ever more intimate role. Once inside the bacterium, this flavonoid turns on a set of microbial genes called nod. In response, the microbe generates a small carbohydrate, tailoring its exact chemical makeup so it will be recognized by the plant producing that flavonoid. Without these chemical modifications, the carbohydrate might trigger the plant to let loose its defenses, as this same basic carbohydrate is also produced by pathogenic fungi. But the tailored version “is specifically recognized by the host plant,” says Long. The plant then prompts root hairs to grow and entangle the nearby Rhizobium bacteria, forming the characteristic root nodules that enable legumes to “fix” nitrogen.

    As the plant becomes an ally in promoting this infection, it builds a tunnel of extracellular material around the dividing bacteria and allows them entry into some of its cells. Studies suggest that decisions about which plant cells are invaded—and, later, when and how diversification of some bacteria into nitrogen-fixing units proceeds—are under joint control of the two partners. Each step, “recognition [by] and escape of the host's defenses, will be played out many times over” to maintain the plant-microbe alliance, Long adds.

    The bad

    The alliance between the gut bacterium Escherichia coli and its human host isn't as mutually beneficial as that between Rhizobium and plants, but it's usually amicable enough. With some strains of E. coli, however, the relationship can get downright nasty. Harmless E. coli can acquire new genes, evolving into virulent organisms that cause a variety of symptoms. Kaper, who studies pathogenic E. coli, has preliminary evidence that these strains may also co-opt the communication between harmless gut E. coli and their human hosts. “My work, along with that of others, has shown how an organism that is typically considered nonpathogenic can acquire virulence traits to become not just one kind of pathogen but several,” Kaper explains. Take the pathogenic E. coli strain O157:H7. This strain has mastered the insidious trick of acquiring genes from other microbes, including a toxin from Shigella, that enable it to invade and destroy the cells in the human gut. Such genetic acquisitions can have devastating effects; in some circumstances, E. coli O157:H7 can kill.

    Pathogenic E. coli can also exert their gut-wrenching effects by taking cues from other E. coli strains. Both normal and pathogenic E. coli often aggregate, homing in on ever stronger concentrations of a chemical called AI-2, which they secrete to guide them to their fellow bacteria, at which time they, too, start producing AI-2. In test tube experiments over the past several years, Kaper has determined that pathogenic E. coli require a strong AI-2 signal—usually indicative of large numbers of the microbes in one place—to activate their attack genes. Last year, Kaper showed that the pathogenic E. coli can add the AI-2 signals produced by the gut's normal E. coli to their own so they can do their dirty work even when their actual numbers are small.

    Moreover, preliminary evidence indicates that the gut itself may provide a boost to its attacker. Kaper and his colleagues engineered pathogenic E. coli that lacked the gene needed for making AI-2. Then they put the mutants in a culture dish with human epithelial cells derived from tumors. To their surprise, the microbes began their attack, even though there were no E. coli present that could supply the AI-2 quorum signal. And that, says Kaper, suggests that the human cells secrete an AI-2-like signal. “Just like bacteria frequently seize upon [and use] host signals, the host may be able to seize upon bacterial signals and manipulate them,” says Beverley. He suspects that the human gut may somehow regulate harmless E. coli by secreting AI-2 and, as an unfortunate consequence, the gut may now play a role in stimulating the pathogen's attack. “There's so much that's important in terms of communication [among bacteria], and then we add another layer [of communication] with host cells,” he adds, intrigued by how complex the interactions are turning out to be. “More attention needs to be paid” to these connections.

    The ugly

    The nasty interaction of E. coli O157:H7 with the human gut and the constructive partnership between plants and Rhizobium represent two extremes. Many evolutionary biologists see the host-microbe relationship as a continuum; even the most destructive pathogens often shift to less virulent forms over time. From an evolutionary perspective, the pathogen's goal is to pass on its genes, explains Wain-Hobson. And sometimes those genes are passed on most efficiently when the host lives longer or is less sick—when the relationship may be a bit ugly but, ultimately, not disastrous for the host.

    Take Leishmania, a protozoan parasite that now infects some 12 million people worldwide, often causing disfiguring disease and sometimes death. At the meeting, Beverley described how one of the organism's genes seems to be designed to hold virulence in check, at least in mice; this is one of the first well-documented examples of a gene that moderates virulence, he adds. The enzyme this gene encodes converts a key nutrient needed by the parasite into a biologically active form. But the gene also seems to keep the number of infective forms of the parasite down, thereby limiting its damage.

    Beverley and his colleagues first stumbled upon this gene in 1984 when they were looking for drug targets. In hope of eventually disabling Leishmania, they were studying how the parasite takes up and makes use of two essential vitamin-like substances, folic acid and biopterin. One candidate drug target was an enzyme called pteridine reductase (PTR1), which converts biopterin into a biologically active form. Recently, they knocked out the gene, expecting to cripple the parasite and limit infection in the mice they tested. Instead, “we got 50-fold more parasites,” says Beverley. It turns out that Leishmania with intact PTR1 genes produce fewer infective forms than those without the gene. “PTR1 is a candidate gene that limits parasite virulence and pathology,” Beverley concludes.

    Many bacteria on the run from antibiotics or immune systems prove to be not only moderately virulent but also highly mutable, says François Taddei, an evolutionary biologist at the University of René Descartes—Paris V. Populations of pathogens often include a few fast-evolving individuals, so-called mutators. These are organisms with DNA repair enzymes or genome quality-control mechanisms that don't work all that well, so they mutate up to 1000 times faster than normal individuals do.

    These mutator strains have long fascinated Taddei: Theoretically, they should accumulate so many genetic defects that they die off quickly, but they persist. Furthermore, his computer simulations suggest that when populations of bacteria—in this case, E. coli—are challenged by environmental stresses such as exposure to antibiotics or a new host, the proportion of mutators increases. This, in turn, speeds the evolution of organisms adapted to that change, such as drug-resistant strains. This model also suggests that pathogenic organisms—which are continually under siege by the host or by antibiotics—have a higher proportion of mutators than their nonpathogenic cousins.
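    The enrichment effect that Taddei's simulations describe can be captured in a toy model. The sketch below (a minimal illustration only; all parameters are invented round numbers, not drawn from his work) tracks a population in which rare mutator cells mutate 1000 times faster than normal cells. After an antibiotic episode that spares only cells carrying a resistance mutation, the mutators' share of the survivors has jumped:

```python
import random

def simulate(generations=60, pop_size=10_000, mutator_frac=0.01,
             base_rate=1e-5, mutator_boost=1000, seed=1):
    """Toy sketch of mutator enrichment: mutators mutate 1000x faster,
    so when an antibiotic kills every cell without a resistance
    mutation, mutators dominate the surviving population."""
    random.seed(seed)
    # each cell is a pair (is_mutator, is_resistant)
    pop = [(random.random() < mutator_frac, False) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [
            (is_mut,
             is_res or random.random() < base_rate * (mutator_boost if is_mut else 1))
            for is_mut, is_res in pop
        ]
    frac_before = sum(c[0] for c in pop) / len(pop)
    survivors = [c for c in pop if c[1]]  # the antibiotic episode
    frac_after = (sum(c[0] for c in survivors) / len(survivors)
                  if survivors else 0.0)
    return frac_before, frac_after

before, after = simulate()
print(f"mutators before stress: {before:.3f}, among survivors: {after:.3f}")
```

Starting from about 1% mutators, the surviving population after the stress episode is overwhelmingly mutator-derived, which is the qualitative pattern the model and the field data both show.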

    Experimental results are bearing out this assumption. A network of eight labs coordinated by Taddei looked for mutator strains of different bacteria in some 900 samples taken from a wide range of environments: water, soil, healthy or sick people, and wild animals. The percentage of mutators among commensal organisms, which don't cause disease, was 3%. But among pathogens, mutators were much more in evidence. Fully 12% of the bacteria involved in certain infections had these high mutation rates; another research group found that 20% of the bacteria in the lungs of cystic fibrosis patients were mutator strains.

    In mouse studies, infections caused by bacteria with a high proportion of mutators tend to be less deadly than those in which the number of mutators is low, notes Taddei. But mutator-rich strains are nonetheless harder to treat. Working with mice, Taddei was unable to cure an infection caused by a mutator-rich strain with one antibiotic; even trying a second antibiotic proved ineffective. Instead, it took an all-out assault with several drugs simultaneously. Together, these data suggest that evolutionary principles could help design effective therapeutic strategies, especially against chronic infections that seem to resist current treatment. “If you have a mutator strain, you better use a very potent cocktail of antibiotics,” Taddei suggests.

    This advice runs counter to the way physicians tend to prescribe antibiotics but makes sense in light of evolution, says Wain-Hobson. He'd like to see more of his colleagues share Taddei's appreciation of how organisms change through time. “One hundred and forty years of microbiology without evolution is more than enough,” he notes. Pasteur, too, would probably agree.

  15. Atom-Scale Research Gets Real

    1. Robert F. Service

    Nanotechnology has spawned useful new materials and impressive technical feats. But the field's most grandiose dreams, and nightmares, remain the stuff of fiction

    In September, speaking to an audience of prominent researchers and government leaders at a meeting on the future impacts of nanotechnology, Michael Crow crossed a familiar line. “How many of you have read The Diamond Age?” the science policy expert and vice provost of Columbia University in New York City asked his listeners. The book, by science-fiction novelist Neal Stephenson, depicts a near-future world in which advances in nanotechnology make it possible to build essentially anything from scratch, atom by atom, leaving society to sift through the cultural ramifications of limitless choices. In response to Crow's question, a few hands fluttered in the air and quickly dropped. These people, after all, were nanotechnology's practitioners—not wide-eyed dreamers.

    Crow wasn't promoting the book as a prophecy. He mentioned it, he says, to encourage researchers to think about their unique position at the dawn of a field that most in the room agreed will be a force in the coming century. But clearly his question touched a nerve. Since its inception, nanotechnology has been dominated by fiction, both unabashed hype and better grounded hope. Science-fiction writers have dreamt up lush scenarios of life with nanosized robots that do everything from building gleaming cities to reaming out clogged coronary arteries. Meanwhile, some researchers have been hard at work drawing impressive but equally fictional blueprints for arrays of tiny gears and pistons that could form the foundation of nanomachines—if anyone ever figures out how to assemble the thousands of atoms needed to build them.

    This year, dystopian fiction came on the scene, as nanotech prophets of doom made bold new pronouncements of nanotechnology's potential to destroy humanity and called for either an end to research in the area or new guidelines to ensure that researchers don't accidentally wipe out life on the planet (see sidebar, p. 1526). “Nanotechnology seems to have this wonderful property of having the most extravagant favorable and unfavorable predictions,” says Edward Tenner, a historian of science at Princeton University in New Jersey.

    In response, many researchers at the cutting edge of dealing with matter on the near-atomic scale have become aggressively matter-of-fact, squirming at the suggestion of cornucopian nanofactories or even humbler mass-produced nanodevices. The nearby hurdles, they say, are challenge enough. “Nanotechnology is a vision, a hope to manufacture on the length scale of a few atoms,” says Don Eigler of IBM's Almaden Research Center in the hills above San Jose, California. For now, he says, “nanotechnology doesn't exist.”

    Yet even the most hard-headed nanoscientists must admit that their minute world, by whatever name, is on the move. Advances in manipulating nanosized materials have already led to improvements in computer data storage, solar cells, and rechargeable batteries. Computer disk drives alone—which rely on controlling the thickness of various layers of material on the nanometer scale—account for a multibillion-dollar market. The field's promise has prompted the creation of about a dozen nanotech research centers in U.S. universities alone. The European Community runs several programs in nanotechnology, including the Nano Network, which contains 18 member research centers working in nanomaterials synthesis. Japan, Singapore, China, Australia, Canada, Germany, the United Kingdom, and Russia all support nanotechnology efforts.

    And there is more to come. Last month Congress approved the bulk of the Clinton Administration's request for money to launch a new National Nanotechnology Initiative. Next year the initiative will spend some $423 million on nanoscience, with more sure to follow. In Japan, nanotechnology funding is slated to jump 41% next year to $396 million. Several European countries are ramping up efforts as well.

    Some U.S. experts hope that the surge in funding will carry over to bolster American research in the physical sciences as a whole. Whereas Congress is 3 years into a 5-year effort to double biomedical research funding at the National Institutes of Health, “support for physical sciences and engineering has been stagnant,” says Tom Kalil, a White House specialist on the economics of technology. “We see [nanotechnology] as a way [of] increasing support for physical sciences and engineering.”

    Richard Smalley, a Nobel Prize-winning chemist, thinks the emerging field may even reverse the long slide in the number of new students choosing careers in science, just as the space race inspired an earlier generation. “It was Sputnik that got me into science,” Smalley says. “Of all the impacts of [nanotechnology's rise], the most important impact—and one that I dearly hope will happen—will be to get more American girls and boys interested in science.”

    All of this activity is leading to some rather grandiose pronouncements. “This is an area that can have a huge potential payoff that can be as significant as the development of electricity or the transistor,” says Kalil. And at a 1998 congressional hearing, Neal Lane, the president's adviser for science and technology and former National Science Foundation (NSF) director, stated, “If I were asked for an area of science and engineering that will most likely produce the breakthroughs of tomorrow, I would point to nanoscale science and engineering.”

    But however promising nanotech's future may be, transforming it from the world of fiction to reality will mean overcoming some daunting obstacles. Beyond manipulating atoms, nanoscientists must perfect ways to mass-produce nanosized objects and integrate them with the larger, human-scale systems around them. And they must do it while working in an interdisciplinary field that requires new levels of cooperation among different specialties, raising familiar challenges of herding academic cats and coaxing them to march in lockstep.

    A new hammer

    Of course, nanotechnology—in the guise of nanoscale materials—has already been around for a long time. For the last 100 years, tire companies have reinforced the rubber in car tires by adding nanosized carbon particles, called carbon black. And living organisms from bacteria to beetles rely on nanosized protein-based machines that do everything from whipping flagella to flexing muscles.

    Today, the term “nanotechnology” refers most broadly to the use of materials with nanoscale dimensions, a size range from 1 to 100 billionths of a meter, or nanometers. Because this range includes everything from collections of a few atoms to those protein-based motors, researchers in chemistry, physics, materials science, and molecular biology all stake a claim to some territory in the field. That tends to make nanotechnology a scientific Rorschach blot: What it includes depends on whom you ask.

    “Nanotechnology is a wonderful umbrella term that takes into account many things that we were doing before some very helpful tools came along,” says William Tolles, a nanotechnology consultant for the U.S. Department of Defense. “To a 5-year-old with a hammer, the world looks like a nail,” adds Lester Lave, an economist at Carnegie Mellon University in Pittsburgh, who studies the development of technology. “Nanotechnology is a hammer, and nanotechnologists are looking around to see what they can hit with it.”

    The development of that hammer was launched by a breakthrough in manipulating atoms back in the early 1980s. In 1982, physicists Heinrich Rohrer and Gerd Binnig of IBM's Zurich Research Laboratory in Switzerland created a new type of microscope, called a scanning tunneling microscope (STM), that is capable of imaging individual atoms. By tracking the changes in a tiny electrical current from an ultrasharp tip to atoms on a surface, Binnig and Rohrer's STM enabled researchers, essentially, to feel their way along a surface at the atomic level and create a computerized image of it atom by atom.
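    The feedback principle behind that atom-by-atom imaging can be sketched in a few lines. In the toy model below (a schematic illustration only; the exponential current law is the standard textbook form, but the decay constant, setpoint, and gain are assumed round numbers, not instrument values), a feedback loop adjusts the tip height until the tunneling current matches a setpoint, so the recorded tip heights trace the surface topography:

```python
import math

def tunneling_current(gap_nm, i0=1.0, kappa=10.0):
    """Tunneling current falls off exponentially with the tip-sample
    gap; kappa ~ 10 per nm is a typical order of magnitude."""
    return i0 * math.exp(-2 * kappa * gap_nm)

def scan(surface_heights, setpoint=0.01, gain=0.05, steps=200):
    """Constant-current mode: at each surface point, nudge the tip
    height until the current matches the setpoint, then record it."""
    tip_trace = []
    tip_z = 1.0  # tip height above a reference plane, in nm
    for h in surface_heights:
        for _ in range(steps):
            current = tunneling_current(tip_z - h)
            # too much current means the gap is too small: retract;
            # too little means it is too large: approach
            tip_z += gain * math.log(current / setpoint)
        tip_trace.append(tip_z)
    return tip_trace

# a surface with a one-atom-high (0.3 nm) step in the middle
trace = scan([0.0] * 5 + [0.3] * 5)
print([round(z, 2) for z in trace])
# → [0.23, 0.23, 0.23, 0.23, 0.23, 0.53, 0.53, 0.53, 0.53, 0.53]
```

The recovered trace reproduces the 0.3-nanometer step exactly, which is all a constant-current image is: a map of where the tip had to be to keep the current fixed.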

    Other imaging tools followed close behind. In 1985, Binnig teamed up with Calvin Quate, an electrical engineer at Stanford University in Palo Alto, California, to make an atomic force microscope, which enabled them to image surfaces that do not conduct electricity. And since then, versions of these microscopes have been developed to pinpoint atoms' magnetic and chemical signatures.

    It wasn't long before researchers used their new tools to jump from imaging atoms to manipulating them. In 1990, Eigler and Erhard Schweizer, also of IBM's Almaden Research Center, used an STM to spell out “IBM” with 35 xenon atoms atop a nickel surface. It was the first time scientists had built something by manipulating individual atoms. Since then, Eigler and colleagues have gone on to build a series of atomic-scale corrals that reveal the wavelike nature of atoms and their electrons for all to see. “Seeing the electrons in their quantum state seems to have had a larger psychological effect than the bare bones of the research itself,” Eigler says.

    Part of that psychological effect lay in convincing researchers that they could build structures an atom at a time. It's an idea that continues to spread. Last year, Wilson Ho, a chemist and physicist at the University of California, Irvine, showed that he could use an STM to help forge chemical bonds between iron atoms and carbon monoxide molecules. Other researchers have used similar techniques to alter the chemistry of silicon atoms on a surface, transforming them into a key component of a transistor.

    Early advances have gone beyond manipulating atoms. Groundbreaking work in materials synthesis has given researchers the ability to control the size and shape of a wide variety of materials at the nanoscale. Along the way, researchers discovered that in many cases the large surface-to-volume ratio of nanoscale materials gives them unique characteristics not shared by their bulk-sized cousins. Nanosized crystallites made from semiconductors such as cadmium selenide, for example, fluoresce in different colors of light as they change sizes. That's already made them a hot property for use as fluorescent “dyes” in biology experiments, and several companies are now racing to commercialize the technology.

    The large surface area of nanoparticles also makes them ideal catalysts, whose surface atoms orchestrate chemical reactions. Whereas bulk gold, for example, is unreactive at room temperature, 3- to 5-nanometer gold particles can promote a number of common reactions and have already been developed commercially by a Japanese company as bathroom “odor eaters.”
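    A back-of-the-envelope calculation shows why size matters here. Treating a particle as a solid sphere and counting the atoms in its outermost atomic shell (a crude geometric estimate, with an assumed atomic diameter of about 0.29 nanometers for gold), roughly a third of the atoms in a 4-nanometer particle sit at the surface, versus a vanishing fraction in a bulk grain:

```python
def surface_fraction(diameter_nm, atom_diameter_nm=0.29):
    """Crude estimate of the fraction of atoms in the outermost
    atomic shell of a solid sphere: the volume of a skin one atom
    thick divided by the total volume."""
    core = max(diameter_nm - 2 * atom_diameter_nm, 0.0)
    return 1.0 - (core / diameter_nm) ** 3

# a ~4 nm gold catalyst particle vs. a 1 mm grain of bulk gold
print(f"4 nm particle: {surface_fraction(4):.0%} of atoms at the surface")
print(f"1 mm grain:    {surface_fraction(1e6):.6%} of atoms at the surface")
```

Since catalysis happens only at the surface, shrinking a particle from millimeters to a few nanometers multiplies the share of chemically useful atoms by several orders of magnitude.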

    Enhanced properties on the nanoscale continue to be discovered. A number of companies are experimenting with spiking common plastics with nanosized particles in order to bolster properties such as strength and impact resistance. Nanosized probes are being developed to detect biological weapons such as anthrax. And carbon nanotubes—tiny, straw-shaped molecules a mere nanometer or so across—have been shown to conduct like either metals or semiconductors, depending on their precise geometry, and they have already been incorporated into a range of electrical components such as transistors and diodes.

    Reality gap

    For most nanobased applications, the key to progress is straightforward: Find ways to make very fine particles or layers of material of a precise size, so that they all share the same electronic, optical, and mechanical properties when incorporated directly into a final product such as a plastic or a solar cell. These simple products are already finding their way into the marketplace, and their relative ease of production might always ensure them the biggest share of the business.

    Most of the buzz about nanotechnology, however, involves more sophisticated applications of nanomaterials, such as electronic devices and tiny chemical sensors. The holdup, so far, is that in most cases there's no obvious way to transform single demonstration devices into a working technology. “Nanotechnology is an area that is profoundly reductionist,” says Harvard University chemist George Whitesides. “We can pick matter apart at its basic level of the atom and reassemble it.” But researchers, he warns, mustn't take that ability too seriously. “We want to be sure we don't fall completely over that cliff.”

    Whitesides's point is that although it is possible to manipulate individual atoms, it's much harder to do it on a grand scale. In 1998, for example, researchers led by Cees Dekker at the Delft University of Technology in the Netherlands reported making the first transistor using a carbon nanotube as a key component of the device. Work since has shown that the electronic performance of such transistors can approach or even surpass that of conventional silicon transistors. “But there is a problem here,” says Tom Theis, who heads physical sciences research at IBM's Thomas J. Watson Research Laboratory in Yorktown Heights, New York. When it comes to making computer chips containing millions of such devices, “it's completely unmanufacturable.”

    The problem of manufacturability remains nanotechnology's Achilles' heel, particularly for the much-hyped possibility of creating nanosized machines. “The technology is still almost wholly on the drawing board,” John Seely Brown writes in a research paper submitted to the September NSF meeting. Brown, who heads the famed Xerox Palo Alto Research Center in California, points out that two of the main proponents of nanomachines, Ralph Merkle and K. Eric Drexler, built powerful nano-CAD tools and then ran simulations of the resulting designs. “The simulations showed definitively that nano devices are theoretically feasible,” Brown writes. “But theoretically feasible and practically feasible are two different things. And as yet, no-one has laid out in any detail a route from lab-based simulation or the extremely elementary nanodevices that have been chemically constructed to practical development.”

    But others argue that visionary research serves a purpose, too. Even if nanogears and pistons cannot be built yet, says Deepak Srivastava, who heads the computer nanotechnology design group at NASA's Ames Research Center in Moffett Field, California, the computer designs still help focus experimentalists on what's worth looking for: “If the ideas are based on real physics and chemistry, one has to know the real possibilities.”

    And of course, experimental science is constantly expanding the scope of what is feasible. Whitesides and Stephen Chou of Princeton University have recently pioneered a new rubber stamping method for patterning surfaces with features as small as 10 nanometers. That is well below the current size limit of about 200 nanometers faced by photolithography, the primary patterning tool used by the computer chip industry. Still, the stamping technique has its own drawbacks: It has trouble patterning multiple materials in three dimensions, as is needed for making computer chips, and ensuring proper alignment of all the various layers of material.

    Another patterning alternative making headway is a burgeoning subfield of chemistry known as self-assembly, in which researchers design materials to assemble themselves into desired finished structures. For example, last year IBM researchers led by chemist Christopher Murray came up with a way to make metallic particles as small as 3 nanometers and then assemble them into a three-dimensional array. Such structures could lead to material for future computer disks in which each nanoparticle stores a bit of data. Still, for now such successes tend to be the exception rather than the rule.

    Future nanoapplications face other grand challenges as well. Even if particular nanocomponents can be mass-produced, researchers will still need to figure out how to position them on surfaces or other structures so they can be used as components in electronic devices, sensors, and the like. For tiny electronic components, researchers will then face the major stumbling block of wiring them up to the macro world.

    They will also confront the more mundane challenge of connecting with one another. By all accounts, nanotechnology will require an extraordinary range of expertise. Researchers have long embraced the concept of interdisciplinary research. And organizations such as the NSF make it a point to finance interdisciplinary centers. Still, academia remains largely hidebound in disciplines, making it difficult to pursue research that falls between traditional fields. “There still exist many elements in the culture of our research universities that discourage multidisciplinary research,” says James Merz, the vice president for graduate studies and research at the University of Notre Dame in Indiana.

    Among the chief culprits Merz points to are the administrative autonomy given to separate departments and the fact that faculty members must obtain tenure from specific departments. Furthermore, Theis points out, essentially no curricula have been developed to train future researchers in the field, let alone degree programs to turn out new nanotech Ph.D.s. Although those impediments aren't necessarily fatal, they can easily hamper the field's development, Merz says.

    Beset by such challenges, nanoreality is bound to fall short of nanohype. The danger is that disenchantment with the gap could dampen financial support for the field, says Mihail Roco of NSF, who heads the U.S. National Nanotechnology Initiative. That's a scenario well known to researchers in high-temperature superconductivity, an enterprise that has struggled to live up to the fanfare that greeted it in the mid-1980s. Still, unlike superconductivity—a narrow field whose impact is limited to a comparatively small sphere of applications—nanotechnology is likely to benefit from its breadth, says Srivastava. “Since the net is much wider,” he says, “the chance is bigger that you will catch some fish.”

  16. Coaxing Molecular Devices to Build Themselves

    1. Dennis Normile

    NAGOYA—When Nagoya University chemist Makoto Fujita thinks about how to store data in tiny spaces, he thinks big—about how nature handles the problem. Then he tries to emulate the answer. “A living cell is incomparably smaller than a compact disc, but its DNA carries far more information,” he says. “Handling data at the DNA's molecular scale could lead to handheld supercomputers. But to do that we need more efficient ways to manipulate molecules.”

    Fujita and others are doing exactly that through a technique called directed self-assembly. By adroitly exploiting the chemical and electrical bonds that hold natural molecules together, they can get molecules to form desired nanometer-scale structures. This could lead to computer logic and memory devices up to 100 times smaller than their current counterparts. Self-assembled molecules might also serve as cages to hold and deliver unstable medical compounds, and as crucibles for chemical reactions.

    The technique builds upon two properties of matter. One is hydrogen bonding, which holds the two strands of DNA together in its double helix. The other is the electrical attraction between negatively charged organic ions and positively charged metal ions. The organic ions are strategically placed on organic molecules, or ligands, which are like the rods of Tinkertoy sets. The metal ions are like the socketed disks that hold the rods together. Unlike Tinkertoys, however, these metal ions and ligands assemble themselves if mixed together in solution in the right proportions and under the right thermodynamic conditions.

    Fujita was among those who pioneered the metal-ion technique in the early 1990s. His first construction used four simple linear ligands and four metal ions to produce square macromolecules. More recently, his group has built an eight-sided, three-dimensional structure made up of two pyramids joined at their bases. The molecule is 3 nanometers across, and the cavity is big enough to hold a C60 molecule, the so-called buckyball. Fujita and other researchers have also created a variety of grids, tubes, cages, and catenanes, which are rings interlocked like links of a chain. “Such structures cannot be synthesized by conventional chemical reactions, but we can very easily construct them using directed self-assembly,” Fujita says.

    J. P. Sauvage, a chemist at Louis Pasteur University in Strasbourg, France, has proposed using catenanes as computing devices. One ring would be mechanically fixed, allowing the second ring to rotate. If the second ring has an additional ion, it could be rotated 180 degrees back and forth in response to an adjacent electrical charge. The position of the ring would represent the 1s and 0s of digital data.
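    Sauvage's proposal amounts to a one-bit memory element, which can be caricatured in code. The sketch below is purely illustrative; the class and method names are invented here, and the real device would be switched electrochemically, not by a function call:

```python
class CatenaneBit:
    """Toy abstraction of the proposed catenane memory element:
    one ring is fixed, the other flips 180 degrees when a charge
    is applied next to it, and its orientation encodes one bit.
    (Names and interface are invented for illustration.)"""

    def __init__(self):
        self.angle = 0  # degrees; 0 encodes bit 0, 180 encodes bit 1

    def apply_charge(self):
        # an adjacent electrical charge rotates the mobile ring 180 degrees
        self.angle = (self.angle + 180) % 360

    def read(self):
        return 1 if self.angle == 180 else 0

bit = CatenaneBit()
bit.apply_charge()
print(bit.read())  # → 1
bit.apply_charge()
print(bit.read())  # → 0
```

Applying the charge toggles the stored bit, which is all a memory cell needs: two stable, distinguishable, switchable states.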

    The molecular cages can be constructed to have small openings through which atoms can enter and react with other atoms, forming molecules that are too big to escape. This arrangement, which can capture and stabilize molecules that otherwise rapidly react and disappear if free in solution, could be the basis for a drug-delivery system.

    Donald Cram, a Nobel laureate in chemistry who is now retired from the University of California, Los Angeles, used such a cage to capture cyclobutadiene, a compound that briefly appears at intermediate stages of some chemical reactions but which until then had been too unstable for chemists to isolate. Julius Rebek, director of the Skaggs Institute for Chemical Biology of the Scripps Research Institute in La Jolla, California, who works on self-assembly using hydrogen atoms, explains that the characteristics of cyclobutadiene had been theoretically predicted but never confirmed. “By encapsulating the molecule, [Cram kept it] stable long enough to be characterized by nuclear magnetic resonance,” he says. Aside from its value to basic research, this trick could be used to deliver drugs that are hard to stabilize.

    Fujita predicts that the number of self-assembled molecules and their uses will rise rapidly in the years to come. Whereas a decade ago there were only a handful of groups working on self-assembly using metal ions, he says, “now there are dozens. The field is advancing very rapidly.”

  17. Is Nanotechnology Dangerous?

    1. Robert F. Service

    The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

    Bill Joy, co-founder of Sun Microsystems, in the April issue of Wired.

    Bill Joy is nobody's Luddite. As co-founder and chief scientist of Sun Microsystems in Palo Alto, California, he can match technophile credentials with anybody on the planet. So when he argued that research into nanotechnology and other fields should be stopped before it wipes out humanity, humanity took notice.

    To the legion of chemists, physicists, and materials scientists who spend their lives trying to understand and manipulate matter at the nanoscale, Joy's warning—published in the April issue of Wired magazine—felt like a bucket of ice water poured over their heads. Others had raised similar concerns for decades. But Joy's status among the digerati lent his allegations new heft. His fears of nanotechnology, genetic engineering, and robotics research were broadcast worldwide by news organizations including the Los Angeles Times and CNN.

    Other voices joined the chorus of woe. In June, a group of nanotechnology aficionados released what they called the Foresight guidelines. Like Joy, they raised the specter of nanotechnology out of control. But rather than simply calling for a halt to research, they outlined measures they said would encourage governmental oversight of nanotech's development. Such supervision, they argued, could help prevent accidental catastrophe, much as the National Institutes of Health's guidelines on recombinant DNA technology helped the emerging biotech industry avoid accidental releases of genetically modified organisms.

    At first, stunned nanoscience researchers quietly shrugged off the concerns. But more recently, they've begun to fight back, arguing vehemently at meetings that what Joy and others fear is at best implausible and more likely plain wrong. “The research community needs to divorce itself from the lunatic fringe,” says Steven Block, a biophysicist at Stanford University in Palo Alto.

    These fears date back to the 1986 book Engines of Creation, by K. Eric Drexler. In it, Drexler, a theoretician and chair of a nanotech think tank called the Foresight Institute, paints a picture of utopia that will result from the coming age of nanotechnology, a time when miniature “assemblers” will run atomic-scale assembly lines, fabricating virtually any imaginable product from the ground up, be it a car, a carpet, or the steak for your grill. But Drexler also applies his vivid imagination to nanotechnology's potential downside. Of particular concern is something he dubs the “grey goo” problem, in which assemblers replicate themselves ad infinitum, consuming everything in their path, including plants, pets, and people.

    In his Wired article, Joy admits that on the advice of scientist friends, he initially dismissed Drexler's prophecies of nanoboom and nanodoom. But last summer, he says, he learned that hypothetical pieces of futuristic tiny machines were falling into place. One component of such assemblers—molecule-sized electronic devices—was “now practical,” Joy writes. Furthermore, he learned that self-replication—a feared component of nanomachines running amok—has already gone beyond biological systems: Researchers had shown that simple peptide molecules can catalyze their own reproduction. Although perhaps not around the next bend, self-reproducing nanomachines were becoming all too plausible, Joy concluded. And that spelled danger.

    But all it really spells, Block and others say, is a flawed extrapolation of current capabilities into the future. “This has all the depth of a parking lot puddle,” says Block. It's simply incorrect to make the logical leap that, just because simple molecules can reproduce, scientists and engineers will be able to construct complex nanomachines that do the same thing. “Nobody has a clue how to build a nanoassembler, much less get one to reproduce,” Block says. Biological systems, of course, reproduce. But they are both far larger than the nanoscale and fantastically complex, with separate systems to store and copy genetic information, produce energy, assemble proteins, transport nutrients, and so on. Viruses, by contrast, are nanosized, points out Viola Vogel, a nanoscientist at the University of Washington, Seattle. But they can reproduce only by co-opting the machinery of living cells. “Even nature did not make a nanoscale structure that can self-replicate,” she says.

    Richard Smalley, a Nobel Prize-winning chemist at Rice University in Houston, Texas, says that there are several good reasons to believe that nanomachines of the sort imagined by Drexler and company can never be made. “To put it bluntly, I think it's impossible,” Smalley says. As he sees it, the idea of little machines that grab atoms and assemble them into desired arrangements suffers from three faults. First, he says, it's wrong to think you can just manipulate an individual atom without handling the ones around it as well. “The essence of chemistry is missing here. Chemistry is not just sticking one atom in one place and then going and grabbing another. Chemistry is the concerted motion of at least 10 atoms.” That means to move that one atom where you want it, you'll need 10 nanosized appendages to handle it along with all of its neighbors.

    Which raises the second problem—what Smalley calls the “fat fingers” problem. A nanometer is just the width of eight oxygen atoms. So even if you're trying to build something hundreds of nanometers in size, “there's just not enough room” in that space to fit those 10 fingers along with everything they are trying to manipulate. Finally, there's the “sticky fingers” problem: Even if you could wedge all those little claspers in there with their atomic cargo, you'd have to get them to release those atoms on command. “My advice is, don't worry about self-replicating nanobots,” says Smalley. “It's not real now and will never be in the future.”

    In an e-mail exchange, Joy replies that he agrees that at the moment the task of making nanobots seems implausible. “No one denies it's beyond the state of the art today, but many people see that this isn't sufficient, in an era of such rapid progress, to allay our concerns.” Twenty or 30 years in the future, he argues, some combination of a chemical self-assembly process and a bit of directed placement of other atoms could create synthetic organisms that prey on cells just as viruses do. And although researchers may raise objections to specific schemes for making nanobots, “where is the reasoned argument that says this is impossible?” Joy asks. “Must we have a demonstration of the danger—a grey goo accident—before we act?”

    But this focus on what is possible is beside the point, says Irwin Feller, who directs the Institute for Policy Research and Evaluation at Pennsylvania State University, University Park. “All things may be possible,” Feller says. “But society will not place equal value on all things.” Or equal concern, for that matter, if the things are considered wildly improbable.

    Which is not to say there are not reasons to be concerned about nanotechnology. At a meeting sponsored by the National Science Foundation in September, representatives from the research community, think tanks, and government funding agencies huddled to discuss emerging concerns for the field. Rather than nanobots, the greatest issues were social: Is the educational system up to the task of training enough nanotech workers? Could nanoscience's progress in areas such as electronics undermine traditional businesses on which thousands of jobs depend? Could the decreasing cost of tools for doing nanoscience and molecular biology make it easier for terrorists or other small groups to engineer dangerous microbes? Such concerns, the researchers concluded, are very real indeed.

    But that wasn't all. The biggest problem nanotechnology could face down the road is public acceptance, said Richard Klausner, head of the National Cancer Institute. Klausner wasn't worried about nanobots. Rather, he argued that nanoscience is exploring numerous revolutionary medical applications, such as creating implantable sensors to watch for the signature molecules of cancer. But unless patients are aware of the development of such tools beforehand, many of them may balk at having their bodies invaded by technology.

    To better understand such concerns ahead of time, researchers need to involve outsiders in the development process, much as AIDS activists helped set priorities for research on AIDS drugs, says Klausner. “I think that didn't happen very effectively over the last 10 years with genetically modified organisms,” he adds. And that's a danger that nanotechnology developers would like to avoid.

  18. Powering the Nanoworld

    1. Robert F. Service

    Whereas most scientists who long to produce nanoscale machines might look to Henry Ford for inspiration, Devens Gust seems to have his sights set on John D. Rockefeller. Today, researchers are making rudimentary nanomachines by harnessing biological motor proteins—such as those involved in muscle contraction—and plunking them down on surfaces in hopes of getting them to do some new types of work. That work takes energy. “If you're going to do this, you will need fuel,” says Gust, a chemist at Arizona State University in Tempe. And like Rockefeller, whose Standard Oil provided the juice that launched Ford's automotive revolution, Gust is ready to prime the pumps.

    Gust and a handful of colleagues have built tiny refineries that convert the energy in sunlight to chemical fuel. The fuel in this case is adenosine triphosphate (ATP), the same energy-rich molecule that powers chemical reactions inside cells. At last August's meeting of the American Chemical Society (ACS) in Washington, D.C., Gust reported that he and his colleagues had collaborated with other groups to run their protein-based molecular machines on little more than sunlight. “They're like GM [General Motors] and Ford, and we're like Exxon,” Gust says.

    Gust didn't start out to make the world's smallest gas stations. For much of the past 15 years, he has worked with Arizona State colleagues Thomas and Ana Moore and numerous students to mimic nature's ability to harvest light energy—an ability on which nearly all life depends either directly or indirectly. In 1997, Gust and the Moores reported in Nature that they had developed a unique photosynthesis mimic inside the protected confines of liposomes, spherical membranes made from two layers of fatty lipid molecules. Spanning those lipid membranes are three-part molecules called artificial reaction centers, named after the apparatus in chloroplasts that allows plants to absorb sunlight and put that energy to work. In this case, a square-shaped porphyrin group absorbs light, which kicks an electron out of its normal ground state and into a higher energy level, leaving behind a positively charged electron vacancy called a hole. The electron and hole are then snapped up by a pair of chemical groups in the reaction centers, separating the charges and creating a chemical potential. In a multistep process, the electron-grabbing molecule then uses this charge separation to shuttle protons from the outside to the inside of the liposome.

    Light-harvesting bacteria and plants use a similar buildup of protons and a protein called ATP synthase to generate ATP. And following nature's lead, in another Nature paper in 1998, the Arizona State researchers showed that they could incorporate ATP synthase proteins inside their liposomes and generate ATP. As the protons pass through the ATP synthase molecule to the outside of the liposome, they cause the protein to spin, a mechanical motion that helps create ATP, which is dumped outside the liposome.

    That set the stage for powering nanotech devices. One such set of devices—nanopropellers—is being developed by Carlo Montemagno of Cornell University in Ithaca, New York, and is reported on page 1555 of this issue. Montemagno and his colleagues also use ATP synthase, which harbors a tiny shaft that spins inside a cylinder. But in this case they anchor copies of the protein rotor on surfaces. They then fuse tiny metal bars to the top of the shaft, creating what looks like a nanoscale version of a helicopter blade that rotates when it's fed ATP.

    To set these minichoppers spinning, Montemagno and his colleagues normally just bathe them in a solution spiked with ATP. But if nanomachines are ever to have a more independent future, researchers will need simpler ways of providing them with energy. So Gust recently teamed up with Montemagno to supply the ATP-generating liposomes. By merely adding them to the nanocopter solution and then shining light on them, the researchers showed that they could set the blades spinning. Michael Therien, a chemist at the University of Pennsylvania in Philadelphia, says he's impressed with the work. “It's a first step to a purely engineered system” of nanomachines that can work without human intervention, he says.

    Gust has also begun teaming up with Viola Vogel and colleagues at the University of Washington, Seattle, to help power a series of nanoshuttles that also make use of ATP-driven biological motors to do the work. Finally, Gust's team has adapted its artificial photosynthesis scheme to do what plants do best: convert CO2 into more complex molecules. At the ACS meeting, Gust reported that his team can start with a compound called pyruvate and add the carbon from a CO2 molecule to make oxaloacetate. Gust believes he and his team may eventually be able to coax their tiny refineries to regenerate complex hydrocarbons such as those found in gasoline from simple sunlight and CO2. If so, nanorefineries may one day be the key to powering both the nanoworld and the world at large.

  19. Cantilever Tales

    1. Alexander Hellemans*
    1. Alexander Hellemans writes from Naples, Italy.

    Building even the simplest nanomachines is a daunting challenge, but working models serve as springboards to grander designs. A classic example is the cantilever, an indispensable cog in the nanoworld that ushered in the scanning probe microscopy revolution. Today cantilevers, which resemble tiny diving boards, are the operating principle behind a host of experimental devices that could debut in the next decade.

    Nanosized cantilevers earned their claim to fame in the mid-1980s with the invention of atomic force microscopes (AFMs). To chart the surfaces of molecules, AFMs run the tip of a cantilever across an object under investigation; intermolecular forces between probe and object tug the cantilever tip up and down over the surface, like the stylus of a record player. A reflected laser beam records this motion, and the signal can be converted into an image of the surface.

    Cantilevers are proving to be more versatile than anyone imagined. Perhaps it's no surprise that a pioneering center for manipulating these tiny tools is the IBM Zurich Research Laboratory in Switzerland, the birthplace of scanning probe microscopy. “The whole field was started right there,” says Naomi Halas, a specialist in applying nanotechnology to chemistry at Rice University in Houston. One master cantilever builder at IBM Zurich is James Gimzewski, leader of the Nanoscale Science group. He and his team set the pace for the rest of the field, Halas says: “When they publish something, it is usually the first and best for a long time.”

    The key to the next generation of cantilever devices is being able to make the miniature planks bend on demand. One approach is to coat the top surface of an AFM cantilever, a blade of silicon about 500 micrometers long and 100 micrometers wide, with short DNA chains called oligonucleotides. Gimzewski's team next exposes these coated cantilevers, which are in solution, to oligonucleotides with a complementary sequence of base pairs. When the matched pairs bind, they exert an intermolecular force that expands the coating, bending the cantilever downward much as the bimetallic strip in a thermostat curls in response to temperature changes.

    A scanning laser can measure the extent to which the oligonucleotide pairs bend the cantilever; the more base pairs that match, the more the cantilever bends. Thus coated, cantilevers might serve as sensitive probes for specific DNA sequences. “We were able to detect a single [base-pair] mismatch,” says Gimzewski, whose team described its advance in Science (14 April, p. 316). This proof of principle has attracted attention from biotech companies, which view cantilever setups as potential rivals to DNA arrays for searching for genetic sequences of interest, including disease genes. “We are now trying to make [coated cantilevers] into a general-purpose diagnostic technique,” Gimzewski says. “This is a new area. There aren't many sensitive tools around.”

    But cantilever-based devices need not be constrained to having the action—or molecules—come to them. They might be used as smart gates that release drugs or other chemicals in response to precise molecular signals. For instance, an anticancer pill equipped with cantilever gates might unleash a powerful drug at the site of a tumor only when a tumor-specific protein gloms onto a specially tailored molecular adhesive coating the cantilevers. Or a chemical for cleaning up a hazardous spill might be stored in pellets and released only when the target pollutant tugs at a cantilever gate, “rather than putting chemicals all over the place,” Gimzewski says.

    Closer to reaching the market, however, are cantilevers for computer data storage. In a project called “Millipede” spanning several labs at IBM Zurich, scientists are testing an array of about 1000 cantilevers as a new way of building nanoscale memory devices. Piezoelectric signals would tell the cantilevers when to jab their hot tips at a polymer film. The impressions in the film would record the data in a much denser format than current media do. “That has the potential to displace magnetic technologies,” Gimzewski says. That bold prediction, no doubt, will be heeded in the nano community: Gimzewski's group has a track record for coming through. “I always aim very high and fail a lot of the time,” he says, “but the few things in which I succeed make an impact and are extremely enjoyable.”

  20. NanoManipulator Lets Chemists Go Mano a Mano With Molecules

    1. Mark Sincell*
    1. Mark Sincell is a science writer in Houston.

    Interacting with the nanoworld is like shadowboxing in the dark. Because objects only a few molecules across are too small to be seen or touched directly, scientists approach them essentially blind and numb. Now a team of physicists, chemists, biologists, and computer scientists at the University of North Carolina (UNC), Chapel Hill, has developed a tool that restores their eyes and fingers. It's called the nanoManipulator.

    “It is like the movie Honey, I Shrunk the Kids, except you don't really get smaller,” says UNC computer scientist Warren Robinett. “We reconstruct your perception so that you see and feel exactly what you would if you were the size of a virus.” Robinett created the nanoManipulator with chemist Stan Williams, who is now the director of basic research in the physical sciences at Hewlett-Packard. The device “puts humans in the loop in a very nice way,” says Ari Requicha, an electrical engineer who works on virtual reality interfaces at the University of Southern California in Los Angeles.

    Robinett and Williams, who have been friends since their undergraduate days in the early 1970s, came up with the idea for the nanoManipulator during a 1991 phone call. At the time, Williams was trying to string single silicon atoms into a nanoscale wire, but he was frustrated by his inability to touch the atoms. “Chemists want to get their hands on stuff,” he says. For his part, Robinett was looking for a safer application of his expertise in programming people-sized robots that mimic the motions of their human operator. When working with big machines, Robinett explains, “a bug in your program can turn a robot into an eggbeater, and it can punch a hole in your skull.”

    To Robinett, the scanning probe microscope (SPM) that Williams used to view his surfaces looked like a small and safe robot. Instead of the robot's TV camera eyes, the SPM has a computer-controlled probe that “looks like an upside-down pyramid at the end of a flexible diving board,” says UNC computer scientist Russell Taylor. The probe skates across the silicon surface, and a computer interface converts the probe's wiggles into an electric signal, a bit like the way a phonograph needle creates sounds from the bumps in a vinyl record.

    The probe can also be programmed to push against the surface like a robotic finger. In the push mode, scanning tunneling microscopes (STMs) are commonly used to photograph products of chemical reactions, measure the mechanical properties of various materials, and rearrange atoms and bend carbon nanotubes. But there's a problem. To point the tip in the correct direction, researchers first scan the surface and find the target in the resulting three-dimensional (3D) image. Then, they must switch from visualization mode to manipulation mode, program the STM tip to move to the right spot, and finally press it against the surface. In the meantime, thermal vibrations of the surface might have bounced the atom away from the tip's preprogrammed target. It is like trying to play blindfolded billiards during an earthquake.

    But even in the push mode, the changing separation between the flexing tip and its fixed mount creates an electric current that is proportional to the pressure exerted on the tip. Robinett and Williams realized that, by transmitting that current to the proper computer interface, a human could locate the target object by touch while pushing gently against it. All Robinett had to do was revamp his human-sized robot control programs to link the microscopic “robot” to a human. And the nanoManipulator was born.

    In its current form, the nanoManipulator is a computer program that fuses an STM with a real-time 3D graphics rendering program and a haptic interface that fits over one finger like a high-tech thimble. The scientist's fingertip gets a little push each time the probe hits a bump. And when the scientist pushes back with his finger, a nanoscale finger presses against the surface.

    “The key to the manipulator is that it immerses users in the environment so they develop a good feel for what they are doing,” says electrical engineer Joe Lyding, an expert in molecular computing and visualization at the University of Illinois, Urbana-Champaign. For example, a user can tell the difference between signal noise and real texture by simply running a “finger” over the surface, says Williams, who has also used the nanoManipulator to “nanoweld” atoms into a wire strand.

    Garrett Matthews, a graduate student in physics at UNC, is using the nanoManipulator to figure out if a virus feels more like a cue ball, a tennis ball, or a rotten tomato. The results are inconclusive. “Right now it feels like a cue ball, but we have other measurements that indicate it should be soft and sticky,” says Matthews. Versions of the nanoManipulator are also being used by chemists and materials scientists at Catholic University of Leuven in Belgium, the University of Toronto, the National Institute of Standards and Technology, and Arizona State University in Tempe.

    Williams is not surprised at the device's increasing popularity. “We used to have to stare for hours at a black-and-white picture of a surface just to tell what was up and what was down,” he says. “The nanoManipulator has untied our hands and opened our senses.”

  21. Strange Behavior at One Dimension

    1. Dennis Normile

    TOKYO—For Kunio Takayanagi, a physicist at the Tokyo Institute of Technology, thinner is better. Takayanagi has calculated that electrons should pass through the 1-nanometer gold wires he has crafted at speeds several orders of magnitude faster than those at which they pass through larger wires. If such wires could be fashioned into circuits, they could set the stage for even faster supercomputers. “In electronic device technology,” he says, “the speed of the electron is the most important thing.”

    Such high speeds are made possible by the internal structure of the nanowires through which the electrons pass. “At larger scales, materials form crystals,” explains Erio Tosatti, a theorist at the Institute for Theoretical Physics in Trieste, Italy. “In the nanowires, the material is not a crystal. It is very different, electrically and mechanically.”

    Takayanagi was the first to determine this structure by putting a miniaturized scanning tunneling microscope (STM) within an ultrahigh-vacuum, high-resolution transmission electron microscope (TEM). By irradiating a thin gold film with an electron beam, he reduced it to a wire. Imaging with the TEM and the STM revealed that when the wire was thinned to a diameter of roughly 1 nanometer, atoms organized themselves into nested tubes, with the atoms in each tube arranged in a helix coiled around the wire axis. The structure is akin to that of carbon nanotubes.

    Takayanagi's prediction of the speed of electron transport is based on some preliminary conductance measurements and theory. Theory suggests that the electrons would move so efficiently that no heat would be generated. Groups at Nagoya and Osaka universities in Japan and at Leiden University in the Netherlands have produced similar wires and plan to measure some of their mechanical and electrical properties.

    In addition to the obvious advantages for the electronics industry, the work has important implications for basic science. Pointing to the helical structure of carbon nanotubes and the double helix of DNA, Takayanagi says it's possible that “all material will take on a stable helical structure if it is one-dimensional like a nanowire.” Tosatti is equally excited. “I think this work could lead to an understanding of how matter spontaneously organizes itself at the nanoscale,” he says.
