News this Week

Science, 07 Apr 2000:
Vol. 288, Issue 5463, p. 22

    Study of HIV Transmission Sparks Ethics Debate

    1. Gretchen Vogel

    A study on AIDS transmission in Uganda has reignited a smoldering debate on the ethics of research in developing countries. Last week, The New England Journal of Medicine published a paper reporting that HIV-infected individuals are much less likely to pass their infection on to a sex partner if they carry a low number of viral particles in their blood. But in an unusual move, the journal also published a strongly worded editorial questioning the study's ethics.

    The journal's editor, Marcia Angell, objected to the fact that the study did not offer participants treatment for HIV infection and that uninfected individuals were not told that their partners were HIV-positive. The study's authors counter that the ethical requirements Angell advocates would make research impossible in the developing world, where 90% of HIV-infected people live.

    Such debates are not new. Scientists have struggled for several years with the ethics of HIV studies in countries where expensive and complex therapy is unavailable. The United Nations Programme on HIV/AIDS (UNAIDS) issued a so-called guidance document on conducting AIDS vaccine trials in late February without reaching a clear consensus despite months of debate (Science, 3 July 1998, p. 22). The document recommends that participants in trials receive care and treatment for HIV and its complications, “with the ideal being to provide the best proven therapy, and the minimum to provide the highest level of care attainable in the host country.” The U.S. National Bioethics Advisory Commission (NBAC) is also trying to come up with recommendations for U.S.-sponsored studies.

    The paper published last week is a retrospective analysis of a study, conducted from 1994 to 1998, designed to determine whether intensive treatment of sexually transmitted diseases (STDs) such as syphilis and chlamydia might help to prevent HIV transmission. A research team led by public health physician Maria Wawer of Columbia University, with colleagues from the United States and Uganda, tested more than 15,000 adults in 10 rural communities for infection with HIV and other STDs. The team then mass-treated study participants in five communities with antibiotics, whether or not they showed STD symptoms. Participants in five control communities were told of their test results and were referred to free clinics, which the researchers stocked with antibiotics. But they did not automatically receive doses of drugs. When a preliminary analysis of the data showed that mass treatment with antibiotics lowered the rate of other STDs but did not influence HIV transmission, the researchers ended the study and gave all participants the antibiotic therapy. The team reported the initial results last year in The Lancet.

    In an attempt to find out if other factors besides antibiotics made a difference in transmission, the researchers went back to their records and matched up sexual partners, something they had not done during the study period. They identified 415 long-term couples in which, at the beginning of the study, one partner was infected and the other was not. Ninety of the uninfected partners became infected during the study. After analyzing a variety of factors, including age, the presence of other diseases, travel history, and the number of sexual partners, the researchers concluded that the most significant factor was the amount of virus in the infected partner's blood. The chance of transmission doubled with every 10-fold increase in viral load.
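The reported relationship is easy to state quantitatively: if risk doubles with every 10-fold rise in viral load, then relative risk scales as 2 raised to the log10 of the load ratio. A minimal sketch of that scaling (the reference load and baseline risk of 1.0 here are illustrative assumptions, not figures from the paper):

```python
import math

def relative_transmission_risk(viral_load, reference_load=1_000.0):
    """Relative risk of HIV transmission, assuming (as the analysis
    reported) that risk doubles with every 10-fold increase in the
    infected partner's viral load. The reference load of 1,000
    copies/mL and the baseline risk of 1.0 are illustrative only."""
    return 2.0 ** math.log10(viral_load / reference_load)

# Two 10-fold steps above the reference load means two doublings:
print(relative_transmission_risk(100_000))  # 4.0
```

The point of the sketch is that the relationship is multiplicative, not linear: a hundredfold jump in viral load quadruples, rather than doubles, the relative risk.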

    Angell was especially troubled that the researchers left it up to HIV-infected participants to inform their at-risk spouses or partners. The fact that the researchers identified the discordant couples only after the study ended does not mitigate their responsibility, Angell says. The team had the data, and “if you have an identifiable partner, you owe it to that partner to see that they are informed,” she says. However, co-author Ronald Gray of The Johns Hopkins University says Ugandan national policy prohibits a health worker from telling a third party about an individual's HIV status. All HIV-positive participants were encouraged to tell their partners, he says. The team also provided free condoms and conducted HIV awareness and prevention campaigns in all the villages. In addition, personal and couples counseling was freely available, Gray says.

    Angell and Peter Lurie of the consumer-advocate group Public Citizen in Washington, D.C., are both troubled that study participants with HIV were left untreated. They argue that researchers should provide the best available treatment to their subjects even if such care is not usually obtainable where the research is conducted. But Wawer and her team say that the cocktail of antiretroviral agents in use in wealthy countries would be impossible to distribute in rural Uganda, no matter the cost. The complicated regimen involves taking pills at regular intervals and requires close monitoring by a clinician. That is simply impossible in villages without medical services of any kind, she says. Edward Mbidde, a medical oncologist at Makerere University in Kampala, Uganda, says that holding all studies in the developing world to the same standard of care available in wealthy countries would make such research impractical.

    But Lurie disagrees. “If you're able to pull off the study, you should be able to pull off administering medications to people,” he says. Drug companies might have been willing to donate the drugs for free, he says, and a different study design “might have produced useful information about the ability of people in rural Africa to take these drugs.”

    The debate is unlikely to be resolved soon. NBAC will continue to hear testimony from panels of international researchers, and it plans to issue draft guidelines by early summer. But it faces a tough task: “In seven meetings around the world, we were simply not able to get a consensus” on what treatment should be provided for HIV-infected participants in poor countries, says Barry Bloom of Harvard University, who headed the UNAIDS Vaccine Advisory Committee that drew up the recently issued guidelines. He says researchers attempting to design ethical trials need to ask themselves, “Even if you can't provide antiretrovirals, can you do better than nothing?” It's a question that all parties agree desperately needs an answer.


    Five Researchers Die in Boating Accident

    1. David Malakoff

    A spring-break research trip ended last week in a disaster that left the tight-knit world of professional ecologists mourning the loss of five of its own. The scientists—two Americans from the University of California (UC), Davis, and two Japanese from Kyoto University—died after their boat capsized in high seas off Baja, Mexico. A third Japanese scientist was missing and presumed dead as Science went to press.

    The victims of the changeable weather in the Sea of Cortez were expedition leader Gary Allan Polis, 53, a spider and scorpion expert at UC Davis; Michael Rose, 28, a postgraduate researcher in Polis's lab; termite ecologist Takuya Abe, 55, of the Center for Ecological Research at Kyoto University; and junior colleagues Masahiko Higashi, 45, and Shigeru Nakano, 37.

    Polis's 17-member team set out around noon on 27 March in two small boats for a 6-kilometer return voyage from a study site on the island of Cabeza de Caballo to the isolated Mexican port of Bahía de los Angeles. The vessels became separated in windswept seas during a sudden storm. Polis's boat, which carried nine people, capsized about 500 meters offshore, survivors say. Polis apparently died of a heart attack after clinging to the swamped craft for several hours, while the other victims drowned attempting to swim to shore.

    Occupants of the second boat—which carried members of a science tourism group from the Earthwatch Institute of Maynard, Massachusetts—returned to search for Polis's boat after it failed to appear. At 10:30 p.m., they reported the disappearance to Mexican authorities, who began an extensive search that eventually led to the recovery of the bodies.

    The accident claimed the lives of both prominent practitioners and younger academics just beginning to make their mark. Polis, whose work on insects had been highlighted in popular magazines and even a children's book, had won the Ecological Society of America's Aldo Leopold Award and more than $500,000 in grants from the National Science Foundation (NSF) over the last decade. Abe presented a lower profile, but he was well known in his field for studies of the complex cooperative relationships between termites and plants. NSF director Rita Colwell issued a statement praising all five scientists as “examples of courage” who “put their commitment to knowledge before their comfort and personal security.”


    New Extrasolar Planets Hint at More to Come

    1. Charles Seife

    The planet hunters have done it again. On 29 March, NASA announced that astronomers at the University of California, Berkeley, and the Carnegie Institution of Washington had bagged two new planets that circle other stars. Less massive than Saturn, the objects are the smallest extrasolar planets yet found—proof that astronomical techniques are now sensitive enough that scientists could spot our own solar system from afar. The discovery has sparked hopes that glimpses of even smaller planets, Uranus-sized and under, are soon to come.

    “They're pushing and pushing and pushing,” says Heidi Hammel, an astrophysicist at the Space Science Institute in Boulder, Colorado. “They'll probably be able to push down to Uranus's mass,” she says—possibly within a year.

    Over the past half-decade, the discoverers—Geoff Marcy at Berkeley and Paul Butler at Carnegie—have found roughly two-thirds of the 30 or so planets known to orbit distant suns. Because the light coming from those gassy planets is feeble compared to their parent stars' brilliance, they are nearly impossible to see with a telescope. Instead, Marcy and Butler detect them indirectly, by studying how they affect the stars they orbit.

    Thanks to gravity, a planet and a star tug on each other with an equal and opposite force. As the planet pulls on the much more massive star, it causes a small but noticeable wobble in the star's motion. Due to the Doppler effect, this wobble appears as a subtle variation in the star's color as it gets redshifted, blueshifted, and redshifted again.

    To detect those changes, the planet hunters use a sensitive spectrometer. Before the light enters the instrument, it passes through a cell full of iodine vapor, which absorbs some of it, superimposing dark lines upon the spectrum at well-known wavelengths. From the way the spectrum shifts relative to that standardized grid, the scientists can get a precise measurement of the motion of the star. By charting stellar motions in a database of over 1000 stars, Marcy and Butler have found a score of planets, each about the size of Jupiter or larger. Naturally, the smaller the planet or the more distant its orbit, the weaker its tug on its mother star—and the subtler the corresponding wobble. Because of this, Marcy and Butler had not been able to detect planets smaller than about half of Jupiter's mass—until now.

    To detect fainter wobbles, Marcy says, the astronomers beefed up a computer program that corrects the “idiosyncrasies” of their equipment at the Keck Telescope on Mauna Kea, Hawaii. “Up until 1 year ago, the precision we could measure in stars was plus or minus 8 meters a second,” he says, noting that the equipment can now pick out wobbles with a precision of 3 meters a second.

    Within months, the newly honed equipment had spotted two planets smaller than Saturn, each roughly a third of Jupiter's mass. The first orbits the star HD 46375, located 109 light-years from Earth in the constellation Monoceros, with a period of 3 days; the second orbits the star 79 Ceti, 117 light-years away in the constellation Cetus, with a period of 74 days. By detecting such small planets—particularly the one around 79 Ceti, with its larger orbit—Marcy and Butler have shown that they would be able to spot a twin of our solar system, with a Jupiter-mass planet fairly distant from its star: 79 Ceti's planet sets the star wobbling at 11 meters per second, just a shade less than the 12-meters-per-second wobble Jupiter causes in the sun.

    Although it's risky to extrapolate from such a small sample, the newcomers hint that big, gassy planets come in an unbroken range of sizes, says Carnegie Institution astrophysicist Alan Boss. “It suggests that there is a continuous distribution of masses” from relatively rare super-Jupiters to fairly common sub-Saturns and below, Boss says. “What we're seeing is really just the tip of the iceberg.”

    The planets' masses aren't known precisely. The Doppler effect reveals only motion toward us or away from us; side-to-side motion does not affect the color of starlight. Thus, if the orbit of a planet is sharply tilted with respect to our view of the star, astronomers on Earth would detect only part of the star's wobble and would underestimate the planet's mass. For that reason, the two new planets' masses may be larger than announced. But Butler thinks it's unlikely that scientists would greatly underestimate the masses of both planets, as well as others that the astronomers have hinted at but haven't yet unveiled.
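That geometric ambiguity can be made concrete: the radial-velocity method yields only the minimum mass (m sin i, where i is the orbital inclination), so the true mass exceeds the measurement by a factor of 1/sin(i). A small sketch of that correction factor, for illustration only:

```python
import math

def mass_correction_factor(inclination_deg):
    """Factor by which a planet's true mass exceeds the measured
    minimum mass (m sin i), given the orbital inclination in
    degrees. At 90 degrees the orbit is edge-on and the full
    stellar wobble is seen along the line of sight."""
    return 1.0 / math.sin(math.radians(inclination_deg))

# An edge-on orbit needs no correction; a 30-degree tilt means the
# true mass is about double the announced minimum.
print(round(mass_correction_factor(90.0), 3))  # 1.0
print(round(mass_correction_factor(30.0), 3))  # 2.0
```

Since a large underestimate requires a nearly face-on orbit, and random orientations make such orbits statistically rare, Butler's confidence that both planets are not grossly undersized is well founded.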

    Marcy and Butler think that they can refine their technique by another factor of 3, according to Hammel. If so, she says, they should soon be turning up planets about the mass of Uranus, a mere twentieth of Jupiter's. To get much beyond that, however, they will need space-borne instruments such as the ones slated for NASA's Space Interferometry Mission in 2006. “We'll be out of business in 10 years” when it starts working, Marcy says. But until then, says NASA scientist Anne Kinney, there are plenty of planets out there waiting to be discovered. “This is brand-new,” she says. “We're going to learn what kind of animals are in that zoo.”


    AIDS Research Head to Retire From NIH

    1. Jon Cohen

    The widely respected, hyperkinetic overseer of the $2 billion AIDS research program at the National Institutes of Health in Bethesda, Maryland, announced last week that he will retire from NIH on 1 September. “That's my 73rd birthday, and the family said, ‘You've paid your dues and it's time to come home,’” says Neal Nathanson, director of the Office of AIDS Research (OAR).

    Nathanson, a renowned viral epidemiologist, was coaxed from his longtime lab at the University of Pennsylvania in Philadelphia (where he still keeps his home) in July 1998 by Harold Varmus, then director of NIH. Nathanson had little AIDS experience, but he threw himself into the job with the enthusiasm of a graduate student. “Part of the reason he was able to accomplish the things he did is because he was not an insider,” says Philip Greenberg of the University of Washington, Seattle, an HIV immunologist who sits on the OAR council. “He had no vested interests, and he didn't have a career to extend.”

    Greenberg and others credit Nathanson with fostering cooperation among NIH institutes, boosting the AIDS vaccine research budget, better coordinating primate research, and rescuing an endangered HIV-specific “study section” that reviews outside grant applications. Nathanson says dealing with the tensions among the various institutes presented him the greatest challenge of all. “The institute directors are much too powerful, and the NIH director is much too weak,” Nathanson says. “The institutes do not play well together. And I've done a lot of behind-the-scenes negotiations.”

    Nathanson plans to return to the University of Pennsylvania, and he's interested in working in the AIDS vaccine area, at Penn or elsewhere. “Who knows what will come along?” he says. NIH has not yet formed a search committee to find his replacement. “It's going to be tough filling his shoes, I'll tell you,” says Anthony Fauci, head of the National Institute of Allergy and Infectious Diseases.


    Jury Awards $545,000 in Stanford Case

    1. Robert F. Service

    A federal jury last week ordered Stanford University to pay $545,000 to a former medical informatics researcher who was laid off 3 years ago after alleging sex discrimination on the job. The researcher is also one of several women whose complaints have triggered an ongoing Department of Labor investigation into the university's affirmative action policies.

    Yesterday's decision involves Colleen Crangle, a computer science expert who worked in the department of medicine at Stanford's medical school. In a suit filed in U.S. District Court in October 1997, Crangle alleged that she was let go in March 1997 with one day's notice because she complained about the way she was being treated by male colleagues—specifically, about a set of restrictions imposed on her activities as a researcher. The jury ruled that Stanford had acted “with malice” toward Crangle, a part-time senior research scientist who did not hold a formal faculty position and who worked on a series of projects.

    The verdict does not address directly the issue of sex discrimination. Judge James Ware threw out a discrimination claim in Crangle's suit in a summary judgment last fall. But the basis for the jury's awarding her damages is its finding that Crangle had a valid reason to feel discriminated against, and the larger issue is clearly on the minds of both parties. “I think [the verdict] sends a real message to Stanford that they can't overlook these cases,” says Dan Siegel, Crangle's lawyer. Despite persistent complaints, he says, “Stanford really has turned a blind eye” toward allegations of sex discrimination.

    Debra Zumwalt, Stanford's acting general counsel, disagrees that the university has ignored the issue or acted improperly. An internal review, she says, found that salaries and tenure rates for women faculty members are on par with those for men. “It's very frustrating that there is a vocal minority who give the impression that there is a persistent problem,” she says. During the trial, Stanford's lawyers argued that Crangle's superiors went out of their way to help find her work when money ran out on the project she was working on. “Crangle's position was explicitly made contingent upon continued outside funding, and that funding ran out,” says Zumwalt. “Obviously, we are disappointed with the jury's verdict” and plan to appeal the case, she adds.

    One of the strongest pieces of evidence introduced to buttress Crangle's case, says Siegel, was a series of e-mails. In one, written in December 1996, Medical Infometrics director Mark Musen discusses Crangle's complaints with Edward Shortliffe, the associate medical school dean, and then states, “I'd like to see what options we have right now simply to lay her off.”

    In its unanimous verdict, the eight-member jury awarded the maximum amount allowed under federal law in such cases. Crangle sees the verdict as vindication of her complaint that, despite her qualifications, she was required to serve as a “girl Friday” to male colleagues. At the same time, Crangle says that if given the opportunity, she would reclaim her job. “I'm tired of seeing good women leave and be forced out,” she says. “The only way it will change is if I, and people like me, stay and work to make it better.”

    The verdict comes as the U.S. Department of Labor is investigating charges that the university has systematically violated rules involving the hiring and promoting of women employees. Because Stanford receives grants and contracts from the federal government, it is required to adhere to federal policies that prohibit discrimination on the basis of race, color, religion, sex, or national origin. The complaints were brought by current and former Stanford employees—a group that numbered as many as 32 in February 1999. This winter the government provided Stanford with the names of nine women, Zumwalt says, including Crangle. University officials say they have nothing to hide: “We have zero tolerance for discrimination and retaliation, and [we have] strong policies that prohibit such behavior. And we enforce those policies,” says Zumwalt.


    UCSF Researchers Leave, Charging Bias

    1. Marcia Barinaga

    A prominent research couple has decided to leave the University of California, San Francisco (UCSF), for tenured jobs at another UC school after accusing the university of sex discrimination. UCSF officials deny any bias or wrongdoing, and some scientists say the real problem is the vulnerability of adjunct faculty members—a problem that isn't confined to UCSF.

    The departure this summer of Nelson Freimer, a key member of UCSF's human genetics program, and his wife, biomathematician Sally Blower, for UC Los Angeles will mark the end of a stormy 5-year relationship between Blower, an adjunct professor, and UCSF. Blower says that powerful male faculty members have humiliated her in a variety of ways, for example by forcing her to beg for permanent work space and shuttling her among departments and temporary space assignments. “If they think this is the correct way to treat women,” says Blower, “I find it offensive. I don't want to be at this kind of institution.”

    Freimer, who joined the UCSF faculty in 1990 and whose work on isolating human disease genes has been integral to UCSF's new human genetics program, supports her claims. “My faith in the values of the institution has been repeatedly shaken by my witnessing Sally's treatment here over the past several years and has been utterly destroyed by her experiences over the past several months,” he wrote in a letter to UCSF Chancellor J. Michael Bishop in early February. Blower has received a position as a full professor in the department of biomathematics at UCLA, and Freimer will direct a new center for neurobehavioral genetics.

    UCSF officials dispute Blower's allegations. In a statement on 23 March, Bishop said that a 1-month inquiry by a UCSF associate dean into Blower's allegations found “no evidence of institutional sexism, gender discrimination, sexual harassment or professional misconduct directed against Dr. Blower.” But at least one administrator agrees that the status of women in medical schools in general needs improvement. “The quality of academic environment for women … is not ideal in academic medical centers anywhere in this country,” says Diane Wara, associate dean for minority and women's affairs at UCSF.

    Blower, 42, completed a postdoc 10 years ago with Robert May at Oxford University, then worked at the UC Berkeley School of Public Health before taking a position as adjunct professor at UCSF in 1995. Over the past decade she has published a steady stream of articles in high-profile journals, including Nature Medicine and Science, modeling the transmission dynamics of infectious diseases. As a theoretician, Blower's space needs upon arriving at UCSF were modest: an office for herself and one for her postdocs. She had her own grant to pay salaries and research expenses, and she says she was not “looking for red-carpet treatment,” but just wanted to be left “alone with a few postdocs [to] do some excellent research.”

    Instead, in an e-mail circulated in February to colleagues, she complains of mistreatment by a group of faculty members, whom she calls “the Senior Boys.” The experience, she writes, left her feeling “powerless and voiceless” against “a vicious brutal sexist system, run by a bunch of bullies.”

    In a written response to questions from Science, the university said that Blower “was not repeatedly evicted from space. On the contrary, extraordinary efforts were made to accommodate her requests,” including the offer of premium space at the Parnassus Heights campus and salary support when she was between grants. “Sally was not picked on uniquely,” says Wara. “Space is precious here, and all of us have to be flexible.”

    Blower is neither suing UCSF nor filing a grievance. Her intention, she says, is to “shine the light … on the status and treatment of women at UCSF.” She says many female UCSF faculty members support her but have remained silent “based on a fear of retaliation.” But others vigorously deny that UCSF is a notably sexist place. “In the basic science departments at UCSF, I firmly believe there is about as little sex discrimination as anywhere in the world,” says geneticist Cornelia Bargmann. “One of the reasons I came to UCSF was because I knew that was true. It was clear you could do well here.”

    Some attribute Blower's dissatisfaction to the fact that she has been a highly accomplished scientist serving in an adjunct position. And Bargmann notes that although her female tenure-track colleagues are thriving in the basic science departments at UCSF, the situation is very different for adjunct professors at most U.S. medical schools. “You look around and you see they are not treated well,” she says about this group, often women, whose soft-money, non-tenure track positions give them little clout. “People feel that they are doing them a favor by giving them a position at all.”

    Wara acknowledges that assisting women in a historically male-dominated system and protecting the rights of adjunct faculty are more difficult than achieving such numerical goals as increasing the ranks of women faculty and providing pay equity. “We have tried for over a decade to put in place strategies to at least diminish the power differential” that female faculty members experience, she says, but “we still have a long way to go.”

    The university began an inquiry into the status of women before Blower made her charges. It is designed, according to a statement, to “research the issues and get beyond the numbers.” A good place to start, say some researchers, may be the concerns of women in adjunct faculty positions.


    Hazel Trees Offer New Source of Cancer Drug

    1. Robert F. Service

    San Francisco—Taxol is a potent and popular cancer drug, but it is harvested from the needles of an endangered tree, and demand for the drug could outpace the trees' productivity. Last week, researchers here at the American Chemical Society's semiannual meeting announced that they have isolated the compound from hazelnut trees and fungi, a finding that could lead to an abundant new source of the drug and possibly lower its cost.

    Generically known as paclitaxel, Taxol is one of the best selling cancer drugs worldwide. It is used to treat ovarian and breast cancer, and many breast cancer survivors take the drug to prevent recurrence of the disease. For now, there's enough paclitaxel to go around, but demand could soon grow: Investigators are testing the drug's power over other cancers, Alzheimer's disease, and multiple sclerosis, among others. If those uses pan out, supplies might become scarce. That's because paclitaxel is made by modifying a precursor compound extracted from the needles of the Pacific yew, an endangered tree that grows along the coast of the Pacific Northwest.

    Angela Hoffman, a chemist at the University of Portland, Oregon, had previously looked for ways to boost paclitaxel production in yew trees. To her surprise, she found a new source of the compound while working on a completely different project. She and her colleagues were studying hazelnut trees to see why some were more susceptible than others to Eastern filbert blight, which is devastating hazelnut groves in Oregon's Willamette Valley. The researchers prepared extracts from several types of hazelnut trees, and after purifying and analyzing the samples, Hoffman noticed the familiar chemical signature of paclitaxel.

    Hoffman and her colleagues determined that hazelnut trees make paclitaxel in their leaves, twigs, and nuts, although only at about 10% of the concentration in yew trees. They also found that fungi living on hazelnut trees produce paclitaxel.

    Down the road, it's the fungi that could be the most valuable find, says David Houck, a natural products expert at Phytera, a drug company in Worcester, Massachusetts. Paclitaxel-producing fungi have also been isolated from yew trees, he says. If a fungus could be coaxed into churning out the drug in vats, “it would definitely have value,” Houck says.


    Lee to Remain as Academy President

    1. Dennis Normile

    Scientists and staff at Taiwan's Academia Sinica are breathing a sigh of relief this week after institute head and Nobel laureate Lee Yuan-tseh decided to remain at the academy rather than join Taiwan's new administration.

    Lee, who has become a major political figure since returning to Taiwan in 1994 to head the academy, endorsed Chen Shui-bian for president just a week before Chen won a narrow plurality in the 18 March election. It was a controversial move, partly because the head of Academia Sinica reports directly to the president of Taiwan. Lee then tendered his resignation, which was rejected by outgoing president Lee Teng-hui, and went on a 2-week vacation. When Chen won, he announced that Lee was his first choice for premier.

    The possibility that Lee might join the new government left a cloud of uncertainty hanging over Academia Sinica, a collection of 24 institutes that represent the island's premier research efforts. Lee is credited with garnering increased financial support, reforming advancement and research procedures, and recruiting leading scientists to key positions. But on 29 March Lee announced he would turn down Chen's offer and stay on as head of Academia Sinica, although he said he has agreed to serve as a presidential adviser.

    Staff are glad to have him back. “We're really happy,” says Sheng-Hsien Lin, director of Academia Sinica's Institute of Atomic and Molecular Sciences. “We think he can do much more [for Taiwan] in science than in politics.”

    Although the immediate question of the academy's leadership is resolved, its long-term political status remains unclear. Lee may have alienated Taiwan's National Assembly, which is still controlled by the long-ruling Kuomintang, the Nationalist Party, by criticizing its policies and endorsing Chen, a member of the Democratic Progressive Party. Some observers also wonder if Lee's support for the progressive party, the most pro-independence group in Taiwan, might affect efforts to build stronger scientific relations with China.


    Chemical Tags Speed Delivery Into Cells

    1. Robert F. Service

    San Francisco—For pharmaceutical makers, trying to get a drug inside cells can be as difficult as meeting the Rolling Stones: You might score tickets to a Stones concert, but to party with Mick Jagger you need a backstage pass. In the case of pharmaceuticals, companies must make drugs water soluble to pass through the bloodstream on the way to their targets. Yet once the compounds have arrived at their destination, they need a very different kind of chemistry to dissolve through the fatty membrane surrounding cells. Often drugs can manage one task but not the other. But a team of California researchers may have found a way to change all that.

    At a meeting of the American Chemical Society (ACS) here last week, team leader Paul Wender of Stanford University reported that he and his colleagues have discovered a small chemical tag that appears to act as a universal pass, allowing compounds access to the interior of cells. The cellular pass is a short protein fragment, or peptide, made up of a repeating series of up to nine arginine amino acids. At the meeting, Wender, an organic chemist, reported that when the arginine peptide is linked to a variety of different drugs, it ferries its cargo into cells at unprecedented rates. When the team hooked the peptide to the powerful immunosuppressant cyclosporin, for example, the drug passed right through human skin grafted onto a mouse—an impossible feat without the peptide.

    “This is an important development,” says John Voorhees, a dermatologist at the University of Michigan Medical School in Ann Arbor. When physicians use cyclosporin to treat skin conditions, they give the drug in capsules in hopes that some of it will make its way from the gut to the bloodstream and eventually inside skin cells. A topical cream could be more effective for treating conditions such as psoriasis and eczema, and it might carry fewer side effects, Voorhees says. “There's nothing else like it” for getting compounds into cells, he states. In fact, the new transporter group has proven so successful that the Stanford team has created a company called CellGate to commercialize the technology.

    The new peptide is far from the first molecule researchers have tried to use as a chemical pass card. Researchers have long known that positively charged, or cationic, peptides and synthetic polymers make decent access keys. But progress toward a universal key has been mixed. Recently, help has come from a surprising source: the AIDS virus. In the early 1980s, researchers discovered that a protein fragment called Tat helps HIV viral proteins enter cells and initiate synthesis of RNA. And researchers at the Massachusetts Institute of Technology and elsewhere went on to show that linking HIV Tat to drugs can boost their uptake as well.

    Unfortunately, HIV Tat is so complex and hard to synthesize that it is too expensive for widespread use, Wender notes. So he and his colleagues set out to find a cheaper, more effective alternative. They started by looking carefully at HIV Tat. Like other cell entry tags, HIV Tat is made up of cationic subunits—in this case six arginine amino acids, two lysines, and a glutamine. That structure initially seemed to confirm the conventional wisdom that a tag's positive charge is what gets it inside cells, says Jonathan Rothbard, head of research at CellGate. But when the researchers looked further, that turned out not to be the case. By testing a variety of cationic peptide chains, the Stanford-CellGate team found that peptides composed entirely of arginines were orders of magnitude more effective at infiltrating cells than counterparts made from leucines or glutamines. “So it's not just a cation story,” Wender says.

    To find out why, Wender's team synthesized short amino acid chains made from ornithine, an amino acid that differs from arginine in just one respect: Its side chain ends in a simple amine rather than arginine's guanidinium group, a change that does away with arginine's ability to form weak hydrogen bonds with its neighbors. To their surprise, the researchers found that the ornithine chains were virtually useless at shuttling drug cargo into cells, suggesting that arginine's ability to form hydrogen bonds is the key. And as it turns out, that hydrogen-bonding capability is a talent leucines and glutamines don't share.

    Just what the peptides bond to and how polyarginine wends its way into cells are still mysteries. But whatever the mechanism turns out to be, it is clearly effective. At the ACS meeting, Wender reported that his team has used polyarginine tags to spirit drugs such as cyclosporin and Taxol into cells, and they are working to extend the method to other compounds. Apparently the new tags and their cargo don't just diffuse across cell membranes, Wender says; rather, it looks as if cells actively pump them inside.

    In one sense, in fact, the peptide may be too effective. “It works with every cell type we've looked at,” Rothbard says. That could make it difficult to target drugs only to particular cells such as cancerous ones. For that reason, Wender says that he and his colleagues are initially focusing on linking their tag to drugs that can be applied locally, such as topical creams to treat skin diseases. Still, even if that's as far as they get—and that seems doubtful—a new access key for getting drugs into skin cells could make a profound difference for patients suffering from psoriasis and other chronic skin conditions.

    • *219th ACS national meeting, 26-30 March.


    Dinosaur Docudrama Blends Fact, Fantasy

    1. Andrew Watson*,
    2. Erik Stokstad
    1. Andrew Watson writes from Norwich, U.K.

    Amid the majestic sequoias of what could be a state park in Northern California, the silence is broken by an unearthly, guttural bellow. An enormous beast plods across the television screen. She kicks out a shallow nest and begins to lay her eggs. Each white egg, the size of a soccer ball, slides gently down an ovipositor and comes to rest in the ground. Then, as a velvet-voiced narrator intones about the dangers that await the young hatchlings-to-be, the giant scrapes soil over the clutch and abandons her brood to their fate.

    It looks and sounds just like a wildlife documentary—so much so that, if you watch long enough, you almost forget that the animals it shows have been extinct for more than 65 million years. But this is Walking With Dinosaurs, a sometimes stunning dino-extravaganza that uses computer animation and detailed puppets to resurrect the creatures and place them in real landscapes. When the $10 million program aired in the United Kingdom last fall, 17 million people—almost a third of the population—tuned in to the six weekly installments, making it the BBC's most watched science program ever and one of its top 20 programs of all time. It also stirred up a controversy.

    Some researchers were unstinting in their praise: “This is going to stand out as one of the best dinosaur shows ever done and certainly the most novel one,” says Tom Holtz, a vertebrate paleontologist at the University of Maryland, College Park, who consulted with the BBC on the project. But others cringed at the way it blurred fact and fiction. Most of the egg-laying sequence, for example, is a screenwriter's fantasy: There is no scientific evidence that the giant dinosaur Diplodocus had an ovipositor or abandoned its young. “Some of the arguments were just so far-fetched, so ridiculous,” says Norman MacLeod, an invertebrate paleontologist at the Natural History Museum in London. “I was embarrassed for the profession.” The British media debated whether docudrama was a suitable way to convey science to the public. Would TV viewers be stimulated, misled, or just confused? On 16 April millions more will get the chance to make up their own minds as the Discovery Channel airs a revised 3-hour version of the show in North and South America.

    The idea of making a wildlife program about dinosaurs was the brainchild of BBC producer and former zoologist Tim Haines. “We used natural history grammar, and we had [the dinosaurs] doing the range of things you expect a living animal to do,” Haines says. To prepare for the show, the BBC team consulted with more than 100 experts on the Mesozoic era, then combed the globe for exotic landscapes in which to film models and puppets. Triassic scenes were shot in New Caledonia, the Jurassic episode in redwood forests of Northern California, and Cretaceous dramas in Chilean lava fields. Meanwhile, animators and scientists hashed out details of dinosaur physiology and locomotion. For information that couldn't be gleaned from bones alone, such as crest displays and camouflage, they drew analogies from birds and other living animals. It took 2 years to animate the computer figures, paste them into the landscape footage, and add shadows.


    The effect can be breathtaking. In one amazing shot, the camera looks up at a Diplodocus walking overhead; the dinosaur's neck takes seconds to pass over, then huge wattles on its fleshy belly fill the screen. Elsewhere, you get a helicopter's-eye view of a herd of sauropods migrating across Utah, watch an ichthyosaur giving birth underwater, and hide in the bushes as Tyrannosaurus rex saliva splashes the camera lens. “True, it's fantasy,” says Ken Carpenter of the Denver Museum of Natural History, a show consultant. “But in this case I think it's good, because it shows dinosaurs as living, breathing creatures, not as skeletons that stand stiffly in a museum.”

    The show has its rough spots. Not all of the animated creatures are equally convincing, and despite the expert assistance, some avoidable errors crept in. The most often-cited blooper is a scene in which Postosuchus, an early relative of the dinosaurs, marks its territory with gushing streams of urine. As living reptiles and birds don't urinate, it's a good bet their ancient cousin Postosuchus didn't, either. MacLeod says that one camp of paleontologists “was outraged at the program because of all of the factual errors and artistic license.”

    Other complaints concerned the seamless way the BBC production blended science and speculation. “I'd like to see a lot more perhapses and maybes in there,” says Karen Chin, a postdoc at Stanford University, who advised the show while a visiting scientist at the U.S. Geological Survey. MacLeod takes a harder line. “Walking With Dinosaurs is a work of fiction; it's a work of creative fantasy,” he says. “It's not a work of science; it's not a documentary.”

    But Holtz argues that anything in paleontology—other than drawing the bones in the ground—involves some degree of speculation. And in portraying extinct animals on television, Haines says, guesswork is unavoidable. “You're trying all the time to communicate big pictures based on as much evidence as you can muster,” he says. “But in the end, if you want an animal to actually just walk across the screen, you've got to start speculating.”

    Some of the imagination perished in the trans-Atlantic passage. To make room for commercials, the Discovery Channel cut about 20% of the footage, including scenes deemed too gory for Sunday evening viewers. Discovery's producers also rescripted sections that their reviewers considered inaccurate or misleading. In addition, they deliberately ruptured the wildlife-documentary illusion by inserting sound bites from interviews with paleontologists. (For viewers who want more detail, the hourlong program “The Making of Walking With Dinosaurs,” set to air on 17 April, gives a lighthearted but informative look at the science and special effects behind the program.)

    But all these caveats could be superfluous: Haines believes that the suspension of disbelief won't last long and that nagging doubts can motivate viewers to think scientifically. “When they ask ‘How do they know that?’ they are asking a scientific question,” he says. There may be a less epistemological payoff, too. “The great value to scientists is getting people to think of these as animals living in an ecosystem,” says Jim Kirkland, state paleontologist for the Utah Geological Survey, who reviewed the script for Discovery. “We need people to see our vision. It's a wonderful way to bring this home to the public.”


    'Faster, Cheaper, Better' on Trial

    1. Andrew Lawler

    For years NASA Administrator Dan Goldin has demanded that scientists and engineers do more with less. Now several reports say that the strategy, while good in theory, was poorly implemented

    When Dan Goldin visited the Jet Propulsion Laboratory (JPL) 8 years ago as the new NASA chief, he brought a harsh message to the people responsible for the world's most successful planetary program: Evolve or face extinction. Last week he was back at the lab, nestled in the hills above Pasadena, California. But this time he had shed what he has called his “tough love” approach to management. Instead he made self-deprecating jokes about his reputation for abrasiveness, told a touching story about his 88-year-old mother, and urged employees to speak their minds.

    No bargain.

    NASA spent less on key elements of Mars '98 than on Pathfinder, despite the challenge of building two spacecraft at half the weight.


    Goldin showed that kinder, gentler face 1 day after the release of a cluster of internal and outside reports* describing ill-trained engineers, inexperienced managers, stubborn bureaucrats, and a workforce apparently paralyzed by a fear of displeasing their boss. The reports' immediate focus is NASA's handling of recent Mars missions, including last year's failures of the climate orbiter and polar lander. But the documents also raise broader questions about Goldin's take-no-prisoners approach to a concept that has been the centerpiece of the agency's space science programs for nearly a decade—“faster, cheaper, better.”

    “I pushed too hard,” Goldin told JPL employees in an uncharacteristically apologetic tone. That pressure, he added, “may have made failure inevitable.” Panel members add that agency employees at all levels were unwilling to tell their superiors that the challenges facing the Mars missions were simply too tough—and unwilling even to admit that fact to themselves. “There was a flawed process and self-deception from Goldin on down,” says one panelist. “Everyone became convinced they could do this.”

    Goldin was brought into NASA to shake up an agency plagued by cost overruns, technical glitches, and a ponderous bureaucracy. And all the recent reports—which include a National Research Council (NRC) study, three internal NASA investigations, and one NASA-sponsored external effort chaired by retired Lockheed Martin manager Tom Young—endorse his attempt to reduce paperwork, speed up mission preparation, and reward innovative solutions. But they point out severe shortcomings in the way faster, cheaper, better was carried out. In the next month a battery of congressional hearings will air these issues, while NASA scrambles to come up with a new plan for Mars exploration (see sidebar).


    Just a short time ago, Mars was the bright star in NASA's firmament. A year after tentative evidence of fossilized life on a meteorite from Mars was discovered in 1996, the Pathfinder spacecraft and rover made a daring landing—on Independence Day—encased in an airbag. The successful missions made national heroes of the JPL team at a time when budget cuts and space station battles were dogging the agency. JPL had pulled off a science mission for just $165 million, a small fraction of the cost of previous efforts like the successful billion-dollar Viking. And even though the $273 million Mars Global Surveyor swung tardily into its proper orbit last year, it is providing riveting data about the planet's surface at a quarter of the cost of the Mars Observer mission, which failed in 1993.

    Mars '98 was to be the triumphant next step. JPL was given the job of putting a spacecraft in orbit around Mars to gather data on the planet's climate while a second spacecraft dropped to the geologically complex surface of the south pole, releasing two basketball-sized probes along the way to plow into the soil as the main lander set up shop to scratch the surface for water. All that was to cost about the same as the single Pathfinder lander (see graphic). What's more, it had to weigh half as much as Pathfinder and be ready on a tight schedule to meet the biennial Mars launch window of 1998.

    However, those requirements from NASA headquarters, accepted without protest by JPL, doomed the mission from the start, according to members of the Young panel and the other investigative teams. “They embraced an impossible dream and then shut off the alarm bells,” says former JPL director Bruce Murray, a California Institute of Technology planetary geologist and consultant on the Young report.

    What appears self-delusional in hindsight seemed bold at the time. Goldin was eager for more space spectaculars to offset a flat budget, and his headquarters managers were reluctant to contradict him, say members of the Young panel. At the same time, JPL managers were terrified of losing their monopoly on planetary missions to the Applied Physics Laboratory in Laurel, Maryland, or NASA's Ames Research Center in Mountain View, California. “There were competitors out there,” says John McNamee, the Mars '98 project manager at JPL. “And Goldin would show up and threaten to kill [JPL's Saturn probe] Cassini or say he might shut JPL down,” recalls Donna Shirley, the original leader of the Pathfinder team and the lab's Mars manager before she left NASA in August 1998. She is now assistant dean of engineering at the University of Oklahoma.

    The demanding cost, schedule, and science requirements proved even more toxic when mixed with what Shirley calls the “hubris” from Pathfinder's success. Another problem was that the team of enthusiastic JPL and Lockheed Martin engineers and managers lacked direction from older mentors. Instead of one or two major missions, JPL was working on two dozen, and there was a shortage of experienced managers, the reports state.

    Despite this daunting combination of pressure from the top and inexperience at the bottom, no one except Shirley squawked publicly. “It never occurred to anyone to say they couldn't do this,” says Maria Zuber, a Massachusetts Institute of Technology geophysicist and member of the Young panel. In April 1998, just months before she resigned, Shirley warned in an International Academy of Astronautics presentation of the dangers of “overoptimism engendered by successes,” worker burnout, and increasing payloads without a corresponding growth in the budget. “In many areas we are at the limit,” she said. NASA managers ignored the warning.

    Circling the wagons

    The failure of the Mars Climate Orbiter last September shook up the team, but confidence remained high on 3 December, the day the Polar Lander was slated to set down. The prospects seemed so rosy to space science chief Ed Weiler that television cameras were allowed in the operations room. But instead of recording triumph, the cameras recorded the indelible image of stunned mission controllers and a glum Goldin.

    Two separate software glitches are the likely immediate culprits in the failure of both the orbiter and lander. Both mistakes were made at Lockheed Martin's Denver plant, where the spacecraft were built. “There is no doubt that we are responsible for both these errors,” says Ed Euler, the company's project manager for the mission.

    A poorly trained young engineer was given the job of coding navigational software for the orbiter. “The company felt [it] was a not-so-critical job,” says Art Stephenson, director of NASA's Marshall Space Flight Center in Huntsville, Alabama, who chaired the panel that investigated the orbiter failure. The engineer failed to use metric units in the coding of a ground software file. JPL, which was overseeing the company, did not catch the error. Once the orbiter was on its way to Mars, a JPL navigator—described by Stephenson as “reserved”—noticed a problem with the trajectory. But his e-mails to Lockheed Martin were ignored, and he did not pursue the matter with his superiors, says Stephenson.
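    The slip is well documented in the mishap investigation: the ground software reported thruster impulse in pound-force seconds, while the navigation software read the numbers as newton-seconds. As a minimal sketch of the defensive pattern that catches this class of bug (the function and unit labels here are illustrative, not NASA's actual code):

```python
# Sketch: tag every impulse reading with an explicit unit and normalize to SI
# at the boundary, so an imperial-unit value is converted rather than misread.

LBF_S_TO_N_S = 4.44822  # 1 pound-force second = 4.44822 newton-seconds

def to_newton_seconds(value, unit):
    """Normalize an impulse reading to SI before it enters navigation code."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * LBF_S_TO_N_S
    raise ValueError(f"unknown impulse unit: {unit}")

# A file mistakenly written in imperial units is caught and converted:
impulse = to_newton_seconds(100.0, "lbf*s")  # ~444.8 newton-seconds
```

Without the explicit unit tag, the 100.0 would have been consumed as newton-seconds directly, an error of more than a factor of four in the computed trajectory correction.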

    The leading theory on why the lander crashed is that a software error caused the engines to shut down prematurely during descent. But “other failure modes cannot be ruled out,” states the Polar Lander investigation board chaired by retired JPL manager John Casani, because there are no corroborating flight data. A telemetry package that would have provided that information was deleted because of cost and size constraints, an omission that the Young panel calls “a major mistake.” An incomplete test of the lander's leg before launch failed to uncover the problem. The glitch was noticed only during a recent test of the 2001 lander, which has the same design. The cause of the failure of the small probes—designed to be released by the lander in flight to burrow into the martian soil—remains unclear. What is clear is that they were inadequately tested. “The microprobes were not ready for launch,” states the Young report bluntly.

    But the technical glitches are only part of a much larger story. According to members of the investigative teams and the Young panel, Lockheed Martin also bid too low, forcing it to rely on younger and, thus, more affordable workers. Even then, the company was unable to hire them in a timely fashion. A stressed and overworked team at JPL could not oversee the contractor's effort properly. And the JPL team received little guidance from experienced systems engineers and little support from senior managers, the reports state.

    Both JPL and Lockheed took to “circling the wagons,” states the Young report, at a time when they “deviated from accepted and well-established engineering and management practices.” There was, the Young panel found, “a failure to clearly communicate” between JPL and NASA headquarters. Headquarters, for example, ordered new instruments to be added to the lander without boosting the budget. “JPL management did not effectively express their concerns” about the tight constraints, and “NASA headquarters did not seem receptive to receiving bad news,” states the report. “This combination of inadequate management oversight and violations of fundamental engineering and management principles became the underlying contributor to mission failure,” the Young panel concluded.

    Those words hark back to the report of the commission that investigated the 1986 Challenger accident. Its authors cited Marshall Space Flight Center's penchant “to contain potentially serious problems and to attempt to resolve them internally rather than communicate them forward.” They also laid much of the blame for the shuttle disaster on NASA's insistence on an aggressive shuttle launch rate.

    No turning back

    Senator John McCain (R-AZ), chair of the Senate Commerce Committee and former GOP presidential hopeful, calls the Young findings “an embarrassment to the agency” and has threatened to conduct his own investigation. “It may be time to amend NASA's mantra of ‘faster, better, cheaper’ to include ‘back to the basics,’” sneers Senator Bill Frist (R-TN). Representative Ralph Hall (D-TX), ranking Democrat on the House Science Committee, says that “it is a shame that we are stalked by ineptness. I hope that NASA heeds this wake-up call.”

    Goldin insists he will—up to a point. “These failures are not a basis for reversing our course in pursuit of revolutionary change,” he told McCain at a hearing just before the Young report was issued. However, some observers fear that the mounting attacks on NASA could roll back that policy. “I'm concerned about it getting sunk,” says Alan Binder, director of the Tucson-based Lunar Research Institute and principal investigator of the Lunar Prospector, a mission described by many as the “poster child” of the philosophy.

    But others say there is no going back to the way NASA did science in the 1970s and 1980s, with multibillion-dollar probes that took more than a decade to build and could swallow a good chunk of a scientist's career. “Faster, cheaper, better is the only game in town,” says Zuber. “It can work—you just can't get rid of prudent testing.”

    Although many researchers were highly skeptical of Goldin's revolution in its early days, the NRC study says it has led to more launch opportunities, more flexibility, and a chance to play a larger role in the development of missions once largely the domain of engineers. “I was dubious at the start,” says Donald Brownlee, an astronomer at the University of Washington, Seattle, and principal investigator for the $205 million Stardust mission to collect comet material. “I thought cheaper missions were not scientifically worthwhile.” But now he's a convert, cautioning that it is “really important that people not overreact” to the Mars failures.

    A revamping of the way faster, cheaper, better is managed could actually improve NASA science, believes Steven Squyres, a Cornell University astronomer and principal investigator of the Mars 2001 mission. “Now when a project is in trouble, it will get help,” he says. “And as someone who has spent the last years devoted to building instruments for Mars missions, I find this absolutely delightful.”

    For NASA to increase its chances of success, however, alarms must be sounded—and answered. Zuber and others say that Goldin, who was unavailable for comment for this story, was taken aback by the Young panel's finding that people were afraid to speak up when trouble was brewing. “Make sure you say something,” he pleaded with JPL employees. “Don't hold it in.” Congress and the scientific community will be watching closely to see if the new Goldin can jump-start his old revolution.


    NASA Returns to the Drawing Board to Plan Next Wave of Mars Missions

    1. Andrew Lawler

    Last week NASA embarked on a new course for its Mars missions. But coming up with the details will be a challenge. “Everything is on the table,” says agency space science chief Ed Weiler about a review that will take place over the next few months.

    The changes will affect a complex program that was set to send orbiters and landers to Mars every 2 years, culminating in a 2008 return of a martian soil sample conducted jointly with France's space agency, CNES. Last month a team led by the Jet Propulsion Laboratory (JPL) in California proposed keeping the basic plan but delaying each mission by 2 years while adding some communications satellites. However, neither Weiler nor the Mars assessment panel led by retired Lockheed Martin manager Tom Young was impressed with the JPL plan. “It was not ready for prime time, to say the least,” says Young panel member Maria Zuber, a Massachusetts Institute of Technology geophysicist.

    Weiler has ordered a NASA headquarters team to come up with a more detailed alternative that the Young panel can review this summer. He has already canceled plans to launch a lander next year, but he has retained the orbiter (Science, 10 March, p. 1722). The most immediate concern now is whether to go ahead with the 2003 lander, whose design is similar to that of the failed 1998 lander. And a decision must be made quickly. “As time passes, your options dwindle,” says Steven Squyres, the Cornell University astronomer who is principal investigator of the rover that could fly in 2003. As a result, Weiler says the team may speed up this portion of the review and deliver its recommendations by June.

    Another question mark is whether to attempt a sample return. NASA officials now say it is too complex and need not be the centerpiece of the exploration program. “The sample return is not the end-all,” says Weiler. The old plan was to use a French Ariane 5 booster to send a combination of NASA and CNES hardware to retrieve two samples from the martian surface. But NASA, burned by Russian delays in delivering parts for the international space station, may not be willing to trust the work to others. In February Weiler warned a group of U.S. scientists that “I will not sign up to a program with a foreign partner in the critical path. … I'm not about to go to my boss and say that everything depends on another country.” Weiler says that he still wants French involvement but is weighing alternatives that would limit NASA's dependence on CNES.

    That attitude disturbs French scientists. “If NASA wants to go it alone, good luck,” says Francis Rocard, CNES's solar system exploration chief. “It would be disastrous for all of us—and for scientists in France in particular.” But a French diplomat says that NASA must have a chance to rethink its planning and that a close collaboration between the two countries is still likely.

    Managers at the European Space Agency in Paris say that NASA's decision won't alter their plan to send up a Mars Express orbiter and rover in 2003. Meanwhile, the Japanese spacecraft Nozomi, launched in 1998, will reach the Red Planet in the same year to study its upper atmosphere.


    Bringing Science to the National Parks

    1. Jocelyn Kaiser

    A new program aims to bolster the science underlying park management, but it will require a culture change among agency leaders

    When Alaska's snow machine association last year challenged a new policy to ban snowmobiles in an 800,000-hectare wilderness at the heart of Alaska's Denali National Park, the park's managers were thrown into a quandary. They could marshal plenty of studies from the Rockies and northern U.S. states showing that the machines damage vegetation and harm wildlife. But when it came to demonstrating those effects in Denali—where they suspected that the fragile subarctic ecosystem was even more vulnerable—park officials came up short. Even when they needed basic information on where caribou and moose overwinter, the most they could find were piecemeal data, for example, from a student's master's thesis about one corner of the park and from a wolf predation study. “There just isn't the information base there,” says Joe Van Horn, a park natural resource manager.

    The scramble to collect data in Denali is just one example of how inadequate science is hampering management decisions in the national park system, which includes some 270 major parks with natural resources stretching from the Alaskan mountain ranges to the coral reefs off Florida. Critics have long charged that the National Park Service (NPS) manages parks to make them look good to visitors—a strategy that can lead to very different decisions from those ecology might dictate. With few exceptions, critics charge, agency officials have tended to view science with anything from benign neglect to outright hostility. The result has been a number of decisions that have been slammed by scientists, challenged in court, and even debated in Congress, involving everything from elk management in Yellowstone, to pollution in Oregon's Crater Lake, to the restoration of the Florida Everglades.

    All that is about to change, says Robert Stanton, director of the NPS. Last summer he launched a new program, the Natural Resource Challenge, to bolster the science underlying park management. Just getting under way, the plan will invest millions more dollars in inventorying species and monitoring park conditions, hiring more scientifically trained managers, and enticing academics to conduct research in the parks—an about-face from years past, when academic research was actively discouraged in many parks. The goal, says Michael Soukup, who will lead the new effort, is to gather enough data to enable managers to anticipate problems rather than lurch from crisis to crisis.

    Biologists are welcoming these moves, but many question whether the Park Service can pull it off. Indeed, the last time the Interior Department, the NPS's parent agency, attempted a major reform—folding the parks' research scientists into a new agency—science was left worse off, say critics like ecologist Mark Boyce of the University of Alberta in Edmonton. “I'm optimistic but skeptical at the same time,” says David Parsons, a former NPS ecologist who now directs the federal Aldo Leopold Wilderness Research Institute in Missoula, Montana.

    Separation anxiety

    The Park Service's new plan is meant to address a litany of criticism that started in the 1960s. A 1992 study by the National Academy of Sciences, for instance, found that “almost invariably … management of the parks was done with inadequate understanding of ecological systems.” And the science that has been done has often been manipulated to support policy, critics allege. Park managers “have very carefully controlled the actual research that's done and the reporting of that research,” says ecologist Fred Wagner of Utah State University in Logan, citing studies of elk and grizzly bears in Yellowstone National Park.

    In 1993, Interior Secretary Bruce Babbitt came up with a solution: Free scientists from the influence of park managers. As part of a plan to beef up science scattered across Interior's various agencies, he pulled out their Ph.D.-level scientists—including all 100 in the Park Service—and put them together in a new science agency, the National Biological Survey. But from the outset the move was controversial, both within the agency and among outside scientists, who feared that scientists would lose touch with the short-term research needs of park managers (Science, 20 August 1993, p. 976). The fledgling agency soon fell victim to a Republican Congress concerned that its plans to inventory species might trample the rights of property owners. In 1995 Congress finally folded the biological survey into the U.S. Geological Survey (USGS), where it has been eking out a smaller budget than originally envisioned.

    The effects of moving park researchers to the new agency have been mixed, many scientists say. It did give biologists independence and enable them to “do real science,” says ecologist Rolf Peterson of Michigan Technological University in Houghton. But it also meant that park managers lacked direct access to basic researchers. “That cuts both ways,” says Peterson. Although some 750 natural resource managers—usually master's- or bachelor's-level biologists—were left in the Park Service, the loss of research scientists “was devastating to a lot of parks,” adds Boyce. At Yellowstone, for instance, park scientists were transferred to distant USGS units. In 1996 and 1997, no one bothered to do aerial surveys of park elk herds, Boyce notes. And the service's few strong research programs, such as Sequoia Kings Canyon in California, lost ground as staff members struggled to adapt to a succession of new managers and layers of bureaucracy.

    The creation of a separate agency also did little to change the cold climate that outside scientists perceived. To be sure, some top-notch research has been done in parks such as Sequoia and Michigan's Isle Royale, where an NPS-supported 4-decade study of gray wolves and moose by Peterson and others has become a textbook example of predator-prey interactions. But such studies have been the exception. Although the Park Service issues over 3000 permits to perhaps 2000 visiting research teams each year, many biologists and geologists, stymied by the confusing permit system and unfriendly attitudes of some park managers, have opted to work in adjacent national forests or military bases. As a result, the flora just outside a park's boundaries are often better documented than the plants within, says Soukup, the Park Service's associate director of natural resources stewardship and science.

    Responding to the long string of critical reports, Congress in 1998 passed the National Parks Omnibus Act, which directed that the parks' management be “enhanced by … the highest quality science and information.” As Congress was considering that measure, the most scathing critique yet came along: a 1997 book by NPS historian Richard Sellars. “The National Park Service remains a house divided—pressured from within and without to become a more scientifically informed and ecologically aware manager of public lands, yet remaining profoundly loyal to its traditions,” wrote Sellars in Preserving Nature in the National Parks. The book had a powerful impact, even more than the legislation. Among park leaders, “the light went on,” says Jon Jarvis, superintendent of Mount Rainier National Park in Washington.

    Bringing science back into the fold

    So in August 1999, NPS director Stanton announced the Natural Resource Challenge. The 5-year plan does not return Ph.D. scientists to the parks—the cadre still exists in the Biological Resources Division within USGS. But it does attempt to bolster scientific decision-making in other ways. One of the first priorities, says Soukup, is to build data banks on natural resource conditions in the parks, from mapping soils and vegetation, to tallying species, to monitoring air and water quality. To do so, the Park Service is investing $14 million this year in natural resources management, on top of the existing $100 million, much of it on a long-recommended monitoring plan and inventory of vertebrates and vascular plants. The agency, which also wants to hire 700 more resource managers, hopes Congress will fund the program at $20 million a year for four more years.

    Soukup also wants to open the parks to university researchers. Within the next few weeks, the NPS will blanket universities with brochures describing a new “Sabbatical in the Parks” program starting later this year. It will offer logistical support, such as housing, computers, and dry lab space, to scientists who want to spend a few months doing field studies in parks. “The hope is that a lot of people will bite,” says NPS resource manager Bob Krumenaker.

    To ease the way for academic researchers, the NPS is simplifying the permitting process, says Gary Machlis of the University of Idaho in Moscow, who is now helping steer the science plan as the NPS's visiting chief social scientist. That includes putting forms on a Web site, standardizing rules for all parks, and scrapping a policy that encourages park managers to turn down a study if it can be done elsewhere.

    Scientists within and outside Interior welcome these reforms. “It's a good step in the right direction, more than I could have asked for,” says Sellars. But good intentions aside, some former Park Service scientists, such as Nate Stephenson, a plant ecologist at Sequoia Kings Canyon, question whether this new program can forge the kinds of close ties between researchers and park managers that existed when basic scientists were still part of the Park Service. “My fear is that with turnover in personnel, some of the communications that developed when we were part of the Park Service will become weaker and weaker,” Stephenson says.

    Critics also question the extent of support for the science program among top Park Service leaders. “Part of the challenge is to change the culture of the Park Service,” concedes Machlis. Each park is a fiefdom ruled by its superintendent, and superintendents vary in their interest in science, Sellars points out. Indeed, only one superintendent from the largest parks rose from the ranks of natural resource managers rather than park rangers. Soukup wants to address that by promoting resource managers and grading superintendents on how well they use science. After all, the best scientific advice in the world won't help if no one at the top is listening.


    Uphill Battle to Honor Monk Who Demystified Heredity

    1. Robert Koenig

    Scientists are renewing a drive to found a “Cold Spring Harbor East” in tribute to Gregor Mendel, whose work was rediscovered 100 years ago

    Brno, Czech Republic
    In what may be the most unlikely birthplace of a science, the discipline of genetics took root in a humble garden in the courtyard of a monastery in this ancient Moravian city. Today, a weathered stone foundation is all that remains of the garden's hothouse, and only a grass yard and a lone sycamore mark the spot where Gregor Mendel, an obscure Augustinian monk, bred pea plants nearly a century and a half ago to learn how traits are handed down from one generation to the next. What Mendel learned from those pea plants revealed the fundamental laws of inheritance.

    A monk and his disciples.

    Pavel Braveny and Eduard Schmidt are hoping Brno will host more than a statue in tribute to Mendel.


    To help mark the rediscovery of Mendel's work 100 years ago, a group of researchers has drafted ambitious plans to transform part of his old monastery, which now has only a small museum, into a modern center that would host scientific meetings and perhaps a bioinformatics institute—a kind of Cold Spring Harbor East. “We want to link Mendel's heritage to the international community of scientists,” says Emil Palecek, a molecular biologist at the Czech Academy's Institute of Biophysics in Brno.

    If Palecek and his colleagues succeed, their center would be a triumph not only for Mendel's legacy, but also for a discipline still haunted in Eastern Europe by one of the ugliest scientific frauds of the last century—Lysenkoism, which poisoned genetics behind the Iron Curtain in the early years of the Cold War. But even though they have won endorsements from high-powered individuals such as Nobel Prize winner James D. Watson and Czech President Vaclav Havel, backers of the so-called Mendel Center have so far received only a lukewarm response from the European scientific community, including potential funders. They are now broadening their appeal through symposia this year to mark the centennial of the establishment of Mendelian genetics.

    History suggests they face an uphill battle. Attempts to grandly honor Mendel's scientific legacy, like the monk's own efforts to promote his laws of heredity, have been a study in frustration. Mendel was born in 1822 in what was then a province of Austrian Silesia and studied at the University of Vienna before moving to Brno, where he did all of his landmark research. He first outlined his findings in a series of lectures in 1865 and published his seminal work, “Experiments in Plant Hybridization,” in Brno in 1866. The monograph, however, was all but ignored until after Mendel, dispirited by the lack of recognition, died in 1884. It would take another 16 years for Mendel to get the credit he was due, when three prominent and competing European botanists—Hugo de Vries, Karl Correns, and Erich Tschermak von Seysenegg—rediscovered Mendel's work in the course of their own research and Correns cited “Mendel's laws” of heredity in 1900. Suddenly, the forgotten monk—thanks to the citations and the ensuing efforts of British zoologist William Bateson to promote Mendel's reputation—was hailed for his pioneering contributions to genetics.

    When Czechoslovakia was carved out of the defeated Austro-Hungarian Empire after World War I, scientists in the nascent country planned to honor Mendel by establishing a genetics research center near his monastery. The Nazi occupation and World War II dashed those plans, however. Brno scientists hid Mendel's manuscripts and notes in a local institute's safe, says Anna Matalova, director of a small Mendel museum, the Mendelianum, that now occupies several rooms of a monastery building. Shortly before the Soviet occupation in 1945, a relative of Mendel's spirited the original manuscript of “Experiments in Plant Hybridization” into Germany for safekeeping. “It's a miracle that the artifacts in the Mendel museum survived at all,” says Pavel Braveny, a Brno physiology professor who is among the local Mendel Center organizers, along with Palecek and physicist Eduard Schmidt of Brno's Masaryk University.

    The end of the war marked only the beginning of the troubles for Mendelian geneticists in Czechoslovakia, who soon came under the thumb of Trofim Lysenko, a Ukrainian agronomist who rose to power in the Soviet Union in the 1930s under dictator Joseph Stalin. Lysenko's dogmatic view that nature could be sculpted at will and the corollary—that the laws of genetics were a hoax—held sway for more than 25 years. Soviet scientists who publicly avowed the existence of genes often were banished to Siberian gulags.

    Lysenkoism infected the Soviet satellites as well. In Brno, city officials removed a stone Mendel monument—a statue of the monk clutching a pea plant—from the city's Mendel Square and stashed it in the monastery, which after World War II was converted into a hostel and government offices. During those dark days, prominent Czech geneticists led by Jaroslav Krizenecky—who had campaigned for a Mendel research center as early as the 1920s—spoke out on behalf of Mendel. He and others paid a high price, with some getting thrown in jail for anti-Lysenkoist views, according to Mendel biographer Vitezslav Orel, a Brno geneticist. It was not until 1964, after the Soviet authorities finally rejected Lysenkoism, that Brno scientists were able to organize a conference on Mendelian genetics. The Mendelianum opened the following year to mark the 100th anniversary of Mendel's lectures on heredity.

    Now some scientists are rekindling the Mendel Center idea. “Mendel was an extremely important figure, more important than Darwin in the development of molecular biology,” says molecular biologist Kim Nasmyth, who directs the Institute of Molecular Pathology in Vienna and has taken the lead on the Austrian side. He envisions an ultramodern conference center and a rebuilt greenhouse at the rear of the monastery site. Such a center, he says, could “do for Brno what the new Guggenheim Museum has done for Bilbao” in Spain—drawing international attention and thousands of visitors.

    That concept has won over a few luminaries. In a letter to Nasmyth in February, Havel said such a center would “promote a better understanding of Mendel and his extraordinary heritage.” Watson—the U.S. Nobelist who co-discovered the structure of DNA, launched the Human Genome Project, and now directs the Cold Spring Harbor Laboratory in New York—visited the Mendelianum in 1998 and says he's “very enthusiastic” about the proposal.

    So far, however, no one has come through with any cash. Recently, the European Molecular Biology Organization (EMBO) turned down Nasmyth's request for seed money. “EMBO would be delighted for scientific meetings to be organized in Brno,” says director Frank Gannon, “but we don't plan to support the provision of a meeting center,” a concept that he says is not part of EMBO's mission. One prominent European molecular biologist says that although he would like to see greater homage paid to the monk, he thinks Brno, a 2-hour drive from Vienna, is too remote for an international conference center. And Brno may find it hard to buck the trend of holding specialized symposia in resort areas such as Crete. Says one skeptic, “It's too bad that Mendel didn't do his work in a warm place with nice beaches.”

    Nasmyth concedes that his “romantic notion” of an architecturally stunning Mendel Center in Brno may take years to achieve. His group is searching for a prominent scientist, or a businessperson with scientific interests, to help set up a strategic plan. He concedes that they need to figure out how to attract top scientists to conferences in Brno, how to use the center during the weeks between conferences, and whether there's a need for another bioinformatics institute in Europe.

    Meanwhile, the center's boosters are hoping to fan enthusiasm at several conferences this year to commemorate the 100th anniversary of the rediscovery of Mendel's work. The center idea came up in passing when many of the world's top Mendel scholars and some leading geneticists gathered in Paris last month for a 3-day colloquium on “The Rediscovery of Mendel's Laws.” The concept also is being aired during the yearlong “Mendel Lectures” series sponsored by the Austrian Academy of Sciences as well as two meetings this spring and summer—the Mendelianum's forum on the history of genetics and a Mendel anniversary conference being organized by Palecek and others.

    This year's festivities are honoring one of the greatest scientific insights of all time. Those trying to establish a Mendel Center hope that, finally, the attention will result in a more concrete tribute. “Something has always emerged to block such Mendel projects,” says Mendel biographer Orel, who was fired from a Brno research institute in 1958 for daring to defend Mendelism. “If it isn't a world war, it's a money problem or a conflicting ideology.” Still, he says, “I hope this latest plan succeeds.”


    Was Lamarck Just a Little Bit Right?

    1. Michael Balter

    Pity poor Jean-Baptiste Lamarck. Today, he is remembered mostly for the discredited theory that evolution occurs when parent organisms pass on to their offspring characteristics they have acquired during their lifetimes. But this French naturalist, who lived from 1744 to 1829, was one of the great scientists of his age. He was the first to study invertebrate animals systematically, and he was an early champion of the idea that evolution rather than divine intervention was responsible for changes in plants and animals over time. But by the early 20th century, Lamarck's concept of evolution had been superseded by Darwin's theory of natural selection and the genetic laws of Gregor Mendel. And since then his name has become inextricably linked to that of his most notorious disciple—the Stalin-era agronomist Trofim Lysenko—who forced Soviet geneticists to accept Lamarckian ideas or be banned from doing research (see main text).

    Recently, however, Lamarck's name has been creeping back into the scientific literature. The reason: an explosion in the field of epigenetics, the study of changes in genetic expression that are not linked to alterations in DNA sequences. Some of these epigenetic changes can be passed on to offspring in ways that appear to violate Mendelian genetics. And although these new findings do not support Lamarck's overall concept, they do raise the possibility that “epimutations,” as they are called, could play a role in evolution. “I don't know of any evidence that Lamarck was even a little bit right, but this is possible,” says molecular geneticist Eric Selker of the University of Oregon, Eugene. “It is increasingly clear that epigenetic mechanisms play important, sometimes critical, roles in biology.”

    Epigenetic changes, which include the “silencing” of genes by such biochemical tricks as attaching methyl groups to segments of DNA so they will not be read by the cell's protein-making machinery, are involved in a host of processes, including gene regulation, development, and even cancer (Science, 15 October 1999, p. 481). Although these alterations in gene expression can clearly be passed from mother to daughter cells—for example, when a muscle cell divides into two or cancerous cells proliferate to form a tumor—they are normally “erased” when the germ cells, which give rise to the next generation, are formed.

    Yet evidence is accumulating that sometimes the epimutations are not erased. This phenomenon has been spotted in plants, fruit flies, and yeast. And the first convincing case in mammals was reported in the November 1999 issue of Nature Genetics by biochemist Emma Whitelaw at the University of Sydney in Australia and co-workers in Scotland and the United States. Whitelaw's team worked with an inbred strain of mice in which all individuals are genetically identical and so should look exactly the same. But the coat colors of these mice varied wildly, ranging from yellow to mottled with every combination in between. Moreover, the coat color of newborn mice was highly influenced by the color of the mother, but not of the father: A yellow mother had more yellow pups than mottled, and a mottled mother had more mottled pups than yellow, violating Mendelian principles that traits are randomly distributed during reproduction.

    The team found that coat color apparently depends on the degree to which a stretch of regulatory DNA just upstream from a gene controlling coat color, called agouti, is methylated. This in turn depends on how much of this methylation state, if any, has been transferred from the mother through the germ line to its offspring. Azim Surani, a developmental geneticist at the University of Cambridge in the United Kingdom, comments that the germ cells are normally “a very efficient cleaning machine, which wipes out many of these epigenetic modifications. … The [Whitelaw] paper shows there are exceptions to this rule.” As for whether epimutations could play an important role in evolution—that is, whether they, like alterations in DNA sequence, could be favored by Darwinian natural selection—Surani says this partly depends on whether they are fairly common, compared to classic genetic mutations, or rare.

    Moreover, Surani and other researchers say, the likelihood that epimutations acquired by adult organisms will be passed on to their offspring is limited by the fact that in most animals the germ cells are segregated very early in life. In mammals, the germ cells are formed and migrate to the embryonic ovaries and testes long before the fetus is born, presumably shielding them from epigenetic modifications in the adult. But the situation might be different in plants, which produce their germ cells much later in their life cycle. In the 9 September 1999 issue of Nature, molecular geneticist Enrico Coen and colleagues at the John Innes Centre in Norwich, U.K., reported that a mutant version of the toadflax plant (Linaria vulgaris)—which results in flowers with radial rather than bilateral symmetry—is due to an epimutation. In the mutant plant, a gene called Lcyc is extensively methylated and thus not expressed—and this methylated state is heritable by subsequent generations of toadflax plants. Coen and his colleagues conclude that such epimutations might have both short- and long-term effects on plant evolution, both in their own right and because methylated genes are more susceptible to classic mutations that alter DNA sequences.

    Coen points out that Darwin, like Lamarck, believed that the inheritance of acquired characteristics played a role in evolution. The main difference between them was that Lamarck thought evolution was driven by an organism's inner need to adapt to its environment, such as in the famous example of the giraffes who stretched to reach the upper branches of trees and then passed on the phenotypic trait for longer necks to their progeny. Darwin, on the other hand, posited that natural selection of genetic alterations, rather than some “inner striving,” drives adaptive changes. Coen cautions that although the new studies of epimutations challenge the dogma “that the only heritable mutations of significance are caused by DNA sequence changes,” they offer no support at all for the idea that morphological changes acquired during the lifetime of an adult organism can be inherited in the Lamarckian sense.

    But some researchers say that the new research does suggest a potential mechanism for how epigenetic changes could play an adaptive role. “Although it would be stretching it to regard epigenetic traits as adaptations comparable to Lamarck's view of how the giraffe acquired its long neck,” comments Selker, “we do know that environmental factors, such as temperature, can influence epigenetic marks such as methylation.” And one thing seems sure: The explosion in epigenetic research has helped restore Lamarck to his rightful place in scientific history, even if he did get the big picture wrong. Says Coen: “Lamarck was a true pioneer of evolutionary theory.”


    Global Survey Examines Impact of Depression

    1. Constance Holden

    A new WHO study seeks to verify recent findings on the social and economic burden of depression worldwide using standardized instruments

    Is depression “the cancer of the 21st century”? That provocative question was raised at this winter's World Economic Forum in Davos, Switzerland. Psychiatrist Raymond DePaulo of Johns Hopkins University School of Medicine explored the issue by noting that depression, although rarely fatal, is increasingly common, devastating to the patient, and—like cancer a generation ago—shrouded by stigma.

    The fact that depression was a topic at this annual gathering of economic Pooh-Bahs is a striking sign of the emerging international visibility of mental and emotional disorders. The World Health Organization (WHO) also has stepped up its efforts to understand the problem, including the launch last month of a 25-nation study. The massive, in-depth survey of mental health conditions around the globe will focus on depression, which WHO expects to be the second leading cause of disability after heart disease by 2020. “Fifteen years ago international bodies would not have even included depression on the list of things to study,” says DePaulo.

    Recent U.S. findings have highlighted the debilitating impact of mental health conditions, depression in particular. Depression is a chronic condition, and it's associated with a range of other medical problems, from alcoholism to heart disease. Although it is often thought of as a byproduct of high-stress urban Western existence, it may in fact be even worse in poor countries. Malnutrition and infections make the brain more susceptible to mental disorders, notes Norman Sartorius, head of the European Association of Psychiatrists in Geneva, and war and social dislocations wreak further havoc on mental health. Depression immobilizes many sufferers, making for a heavy—yet at present largely unrecognized—drag on economies, lowering worker productivity and making families dysfunctional.

    The social and economic burden of depression and other mental illnesses was dramatically underscored in a 1995 Harvard study called the Global Burden of Disease. The work was done by Christopher Murray, a professor of international health economics, and demographer Alan Lopez, both now at WHO in Geneva. Using a sophisticated new method for calculating the duration and severity of a disability, known as disability-adjusted life years (DALY), the researchers found that psychiatric and neurological conditions have little impact on life-span, accounting for a paltry 1.4% of all deaths. But these conditions represented an astounding 28% of all disabilities. Indeed, Murray and Lopez's group found that depression is the leading cause of disability worldwide in terms of number of people affected. What's more, four of the other top 10 causes also relate to behavioral rather than physical illness. These are alcoholism, bipolar disorders, schizophrenia, and obsessive-compulsive disorders.

    Largely because of declining mortality from infectious diseases, they estimate that depression, which ranked fourth in DALY listing for 1990, will claim second place by 2020, above traffic accidents and trailing only ischemic heart disease. “I see depression as the plague of the modern era,” says Lewis Judd, former chief of the U.S. National Institute of Mental Health (NIMH) and chair of the psychiatry department at the University of California, San Diego.

    Although the global burden report has shaped thinking in the past 5 years, its rankings remain controversial because the data are incomplete. “Everyone has been using different instruments and different designs” to gauge mental health problems, says psychiatrist Sing Lee of the Prince of Wales Hospital in Hong Kong. Researchers hope the new WHO survey will clear up the confusion. “This time, the idea is to use standardized instruments translated in a locally culturally valid manner,” says Lee, chief scientific adviser for the survey in China.

    The study, which began last month in France, requires 2-hour face-to-face interviews with 150,000 people, teenaged and older, from North America, Western Europe, Mexico, Chile, Cuba, Colombia, Ukraine, South Africa, India, China, Japan, Indonesia, and New Zealand. Researchers hope to complete the survey by mid-2001. Local people are being trained to do the interviews, in which subjects will be asked about emotional symptoms, substance abuse, and psychosis as well as chronic health problems such as arthritis or back pain. The questions on depression were taken from standard depression scales, and all of the questions have been translated from English and then back-translated to ensure that the meaning has been retained. Sartorius notes that the questions also need to be modified to fit local conditions. “Do you have a fear of elevators?” he notes, might be turned into a query about mountains in rural India.

    By correlating the answers to the questions dealing with depression with those relating to chronic physical problems, researchers should get a better sense of how much of the iceberg of depression lies below the surface. “We don't know how many people with headache or fatigue are really suffering from depression in disguise,” says study designer Ronald Kessler, an epidemiologist at Harvard Medical School in Boston.

    The survey includes retrospective questions in order to uncover the impact of major political events. In war-torn Lebanon, for example, a survey in the early 1990s revealed sky-high rates of major depression. In South Africa, some 5000 people will be asked additional questions about the effects of violence and racism. And in China, the survey may cast light on the high incidence of suicide among young rural women in recent years. It could also probe the psychological impacts of the one-child-per-family policy that has been in effect for 20 years—revealing whether China has produced what Kessler calls “a generation of little narcissists.”

    The WHO survey, funded by a variety of public and private sources in each country as well as by international donors, isn't the only game in town. In September the NIMH hopes to receive the results of a report by the Institute of Medicine on chronic neuropsychiatric problems, including depression, schizophrenia, epilepsy, stroke, and developmental disorders, in the developing world. NIMH director Steven Hyman says that he hopes both exercises will help show governments in developing countries that it makes economic sense to pay attention to mental health problems. With countries scarcely able to keep up with such pressing medical problems as malnutrition and AIDS, it's not realistic to expect them to create new mental health infrastructures, Hyman acknowledges. Instead, he says, “we need to develop models where depression is identified and treated in primary-care settings worldwide.”

    With some countries lacking even the most rudimentary training for mental health professionals, Sartorius admits that improvements will come slowly. But he hopes that the weight of the next round of surveys and reports will, “like water wearing down stone, eventually convince the public and politicians that this is a priority.”


    Demand for Tech Workers Benefits Undergraduates

    1. Jeffrey Mervis

    The rising number of H-1B visa applications has created a pot of scholarship money at NSF, although not all universities are bidding for it

    For years, U.S. high-tech companies have complained that technically trained workers are in such short supply that they need to import tens of thousands of foreign scientists and engineers to keep labs running and production lines humming. In 1998, these companies successfully lobbied Congress—over the objections of some unions and engineering organizations—to temporarily increase the maximum number of visas for foreign technical workers from 65,000 to 115,000 a year.

    Companies aren't the only ones benefiting, however. Thousands of U.S. undergraduates majoring in computer science, engineering, and mathematics (CSEM) will soon be awarded scholarships under a new National Science Foundation (NSF) program, established by the visa legislation, that is aimed at boosting the supply of homegrown skilled labor. And the number of such scholarships, earmarked for students on tight budgets, could jump significantly if Congress again raises the cap, as seems likely.

    The scholarships are funded by a portion of a $500 fee that Congress imposed on applications for technical-worker visas, known as H-1B visas. NSF was asked to administer the scholarship program and invited universities to bid for the $22-million-a-year pot. Over the past few weeks, it has chosen 77 of a projected 110 winners from some 280 community colleges, 4-year schools, and graduate research universities that competed for this first round of institutional grants, which average $220,000. In the next few months, each school will choose recipients for the $2500 annual scholarships, which are renewable for a second year.

    Al Cherry hopes to be among them. A mathematics major at Grambling State University—a historically black school in north central Louisiana that has received one of the NSF grants—Cherry would like to teach at the high school he once attended in Monroe, Louisiana. The scholarship would allow him to cut back on the 20 hours a week he works in the collection department at Chase Manhattan Bank—a job he needs to support himself and his wife while he's in college—and devote more time to his studies. “When I get home [after a 65-kilometer commute], I don't always feel like doing schoolwork,” he says.

    Help wanted.

    House and Senate bills would boost the ceiling on visas, top, and generate more money for scholarships, above.

    That pot may soon grow. Last month, Representative David Dreier (R-CA) introduced a bill (H.R. 3893) that would net NSF $30 million for scholarships in 2001 by raising the visa cap to 200,000 and doubling the application fee to $1000. A bill moving through the Senate (S. 2045), introduced in February by Senator Orrin Hatch (R-UT), would have a similar effect on the NSF program. The measures have attracted bipartisan support and strong industry backing. “The more that we can do to get our kids involved in these fields, the better it will be for the U.S. economy,” says an aide to Representative Adam Smith (D-WA), a co-sponsor of the Dreier bill.

    The scholarship program has struck a chord among public colleges and universities, where $2500 a year can go a long way. “This will cover an entire year's tuition, plus some book money,” notes William Velez, a professor of mathematics at the University of Arizona in Tucson, which has received one of the grants. “I hope that this will allow students to work less and study harder, raising their GPA [grade point average] and making them more competitive in the job market.”

    The scholarships also give institutions a chance to expand efforts to attract and retain minority students and women in science, engineering, and math. “It fits well with our existing activities,” says Anthony Sebald, associate dean for academic affairs within the engineering school at the University of California, San Diego (UCSD). The school runs a mentoring effort aimed at reducing the traditionally high attrition rate among undergraduate engineering students, and Sebald views the 22% women and 13% underrepresented minorities among the school's 2600 majors as evidence of progress that the CSEM program can build upon. “We see it as another opportunity to help a group of students who are grossly underrepresented in these areas,” adds Michael Forman, associate dean for science at Purdue University in West Lafayette, Indiana, another winner.

    Highly selective private schools, in contrast, were conspicuous by their absence from the first round of competition. Indeed, many administrators said they were unaware of the program. Two possible reasons for the lack of interest are the steep tuition at such schools and the relative dearth of eligible students. “A year at Brown costs $34,000, and in addition there are very few students of color in computer science,” says James Wyche, associate provost at Brown University and head of a national project that partners the Ivy League schools and other elite universities with so-called historically black colleges and universities (HBCU). “You have a better chance of tapping a large pool of students at an HBCU.”

    Grambling is such a school. Maddupu Balaram, head of the computer science and mathematics department and principal investigator of the project funded by the school's NSF grant, says he's pleased with the chance to lighten the financial load on students. The grant will also provide students with more time to take advanced courses and carry out research projects, says Balaram. “Our undergraduates have no trouble getting jobs, but I'd like them to pursue graduate degrees in specialized fields, which would make them even more attractive to employers.”

    LaNessa Jackson, a sophomore majoring in computer science, says a scholarship would reduce her debt and let her spend more time with faculty members in preparation for graduate school. She also sees the award as an important carrot for minority students who, in addition to the financial burden, might not feel capable of pursuing a technical degree. “A lot of people I know run away from computer science because they think it's hard and because they don't know other minorities who have succeeded,” she says.

    The lack of role models never bothered David Morales, a junior majoring in computer science and math at the University of Arizona, another grant recipient. “I've always wanted to get a Ph.D.,” says Morales, who grew up in a Hispanic farming community outside Tucson and who is working 30 hours a week to help fill the gap left when his father was laid off last summer from his job at a copper mine. “My dream is to go to [the University of California,] Berkeley and then come back to Arizona and start programs to help other minority students learn math the right way.”

    Federal officials hope that the CSEM program will help students at Grambling, Arizona, UCSD, Purdue, and elsewhere to reach their goals. Their success, in turn, could produce a wave of U.S. students who can meet the ever-growing demand for high-tech workers.

  18. Biologists and Engineers Create a New Generation of Robots That Imitate Life

    1. Gary Taubes

    As they learn to walk, crawl, and fly, biologically inspired robots advance both robotics and scientists' understanding of how animals move

    The robots developed at Case Western Reserve University in Cleveland may have unimaginative names—Robot One, Robot Two, and Robot Three—but they make up for it in looks. “All three so far are six legged,” explains Roger Quinn, the mechanical engineer who built them, “but they get more and more like an insect.”

    To be specific, they get more and more like a cockroach. The legs of Robot One, for instance, emerge from directly beneath its balsa-wood body and are distributed in the simplest possible hexagon. Robot Two adopts a sprawl posture, with the legs, cockroachlike, on the outside of the body. The legs of Robot Three are specialized to look and act like cockroach legs—small, mobile front legs for grooming and exploring the environment; medium-sized middle legs; and big, powerful rear legs for running and jumping.

    This biological mimicry gives Robot Three the general gestalt of the urban dweller's worst nightmare: a 14-kilogram, bread-box-sized creation that not only looks like a cockroach but promises to walk, run, and jump like a cockroach. The only feature that's obviously not inspired by biology is the tether that supplies power to the pneumatic air compressors which serve as muscles. This tether means Robot Three cannot move about on its own. But Robot Four—which will also be modeled after a cockroach, says Quinn, “only more so”—will carry its power supply with it. It “will be able to run around the campus,” no doubt to the delight of Case Western students and faculty.

    The Case Western robots are among the vanguard of a new army of biologically inspired robots emerging from laboratories throughout the world. Indeed, a revolution seems to be going on in robotics, fueled by new insights and generous financial support from the Defense Department—in particular, from the Defense Advanced Research Projects Agency (DARPA) and the Office of Naval Research (ONR), which together are pumping tens of millions of dollars into the field.

    This largesse is promoting a union between engineers and biologists, and is spawning a new generation of swimming, flying, and crawling robotic offspring. Alan Rudolph, manager of DARPA's 2-year-old controlled biological and biomimetic systems program, says his goal is to create robots that can go where humans either can't go or where it's not safe to send them, such as the surfaces of other planets, the bowels of a burning building, or the risky confines of a minefield or a battlefield. The idea, he says, is to take inspiration from the natural world, rather than build on existing machines. Why use two legs or four wheels to maneuver when a six-legged insect will do the job better? If you want a vehicle that moves sideways and diagonally as effortlessly as back and forth, why not find your inspiration in an eight-legged spider that can do just that? To put it simply, why not let evolution do your thinking for you?

    For biologists, the lure is to create a moving, three-dimensional model to test their theories of how animals function. “The experiments you do on the robots tell you what you ought to be looking for within the animal,” explains Joseph Ayers, a neurophysiologist at Northeastern University in Boston, who is now working on a robot lobster (see sidebar on p. 82). “To say it works both ways is an understatement.”

    Gil Pratt, for instance, an electrical engineer and computer scientist who runs the Leg Lab at the Massachusetts Institute of Technology (MIT), describes his motivation as twofold: to build a robot to do housework for him—“I'm a lazy person by nature,” he explains—and to understand the mechanisms of control, balance, and locomotion in animals and insects. “It's very easy to theorize about how biological systems work. What's great about building robots is you can actually test the theories.”

    “The art in this is what you take from biology,” adds Michael Dickinson, a neurobiologist at the University of California, Berkeley, who is working on a robot fly with DARPA support. “If you really want to make a useful robot, you don't want to just copy nature. You want to extract principles at the right level”—even if the resulting robot doesn't look much like the organism that inspired it (see sidebar on p. 81).

    From animals to robots

    Robot-building collaborations are often sprawling and involve unlikely partners: computer scientists, mathematicians, electrical and mechanical engineers, as well as biologists and zoologists. At Case Western, for instance, the program started with Randall Beer, a computer scientist who was frustrated by the slow progress in artificial intelligence and thought guidance might be found in the nervous systems of simple insects. Beer began collaborating with Case Western biologists Roy Ritzman and Hillel Chiel, and created a computer simulation of a walking insect. They then recruited Quinn, the engineer, to build hardware models to test the software simulation in the real world. Robots One, Two, and Three were born.

    As each robot has progressively captured more of the mechanical and control complexity found in the animal, says Quinn, the robots have become increasingly capable. Robot Two, for instance, can walk over rough terrain, whereas Robot One cannot. Robot Three can climb. And having a larger-than-life-sized cockroach to manipulate has taught the biologists a lot, adds Ritzman. For example, to get the robots moving fluidly, the researchers needed to incorporate input from strain gauges in the body into the computer that controls the robots, suggesting that the actual animal relies on organic structures that do the same thing, he says. “These insights have led us to propose many more experiments in the future” on the cockroaches themselves.

    Beer eventually returned to the world of software simulations, while Ritzman—who spent 20 years studying the instinctive mechanism the cockroach uses to flee predators or an approaching rolled-up newspaper—has continued to work with Quinn on robots. “We started getting a lot of money from DARPA to build robots based on how cockroaches walk. I started getting less money for cockroach escape, so I took the hint,” he says. “I thought I was getting into a hobby, and it took over my career.”

    Other biologists tell similar tales. Ayers, for example, spent much of his early career studying how the lobster's nervous system controls its behavior and locomotion. Now he leads his own team of engineers working on a robot lobster, after he realized that the firing pattern the lobster's nervous system uses to move the animal's limbs would work well for almost any six-legged creature, including a robotic one. He had his epiphany when DARPA asked him if it was possible to put sonar on live lobsters, apparently for use as unobtrusive underwater espionage agents. “I naïvely said it would be easier to build a lobster robot,” says Ayers. “Now I appear to be a card-carrying roboticist, but it's very clear to me that my training in neurophysiology is expert training for a roboticist and much better at the control end than anything a mechanical engineer gets.”

    For other researchers, robotics offered a new avenue of research when studies of live animals hit a dead end. Dickinson and Charlie Ellington of Cambridge University, for instance, both started off studying how real insects fly. Ellington's lab gets credit for the observation that the flapping of insect wings seems to generate two to three times more lift than can be explained by conventional aerodynamics. (This led to the misconceived suggestion that science can't explain why the bumblebee flies.) “We took our studies of real insects about as far as we could,” says Ellington, “and we wanted them to do things that they couldn't, so we had to build our own.”

    Specifically, Ellington concluded in the early 1990s that his research would benefit mightily from an insect that could release smoke from its wings on command, something real insects resolutely refused to do. So he and his colleagues built “the flapper,” a mechanical, computer-controlled, scaled-up model of a Hawk moth that emits smoke as it flaps its mechanical wings, allowing Ellington and his colleagues to visualize the air flow around and over the wings. A similar line of thought led Dickinson and Berkeley engineer Ron Fearing to build a scaled-up model of a robotic fly. To create a robot big enough to work with, they had to scale up not just the fly's body but the size and relative strength of the forces on it—the viscosity of air matters considerably more to a fly, for example, than it does to a pigeon. The researchers adjusted the forces to mimic those acting on a real fly by playing with variables such as the speed of the wings and the medium around them; they ended up with a robotic fly with a half-meter wingspan slowly flapping its wings in 2 tons of mineral oil. That robot allowed them to directly measure the forces generated by the wings.
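
    The scaled-up robofly boils down to dynamic scaling: keep the Reynolds number, the dimensionless ratio of inertial to viscous forces, the same while trading size for speed and viscosity. The sketch below illustrates the idea with round, assumed numbers for wing length, speeds, and viscosities; they are not the published parameters of Dickinson and Fearing's apparatus.

```python
# Dynamic scaling sketch: match the Reynolds number of a real fly's wing
# in air with a much larger, slower model wing in mineral oil.
# All parameter values below are illustrative assumptions.

def reynolds(speed_m_s: float, length_m: float, kin_visc_m2_s: float) -> float:
    """Re = v * L / nu (ratio of inertial to viscous forces)."""
    return speed_m_s * length_m / kin_visc_m2_s

# Assumed real-fly regime: a ~3 mm wing moving at ~2 m/s in air.
NU_AIR = 1.5e-5           # kinematic viscosity of air, m^2/s
re_fly = reynolds(2.0, 0.003, NU_AIR)

# Scaled model: a ~0.25 m wing in viscous mineral oil (assumed 1.2e-4 m^2/s).
NU_OIL = 1.2e-4
L_MODEL = 0.25
# Choose the model wing speed so the two Reynolds numbers match:
v_model = re_fly * NU_OIL / L_MODEL    # a slow, easily measured flap

print(f"fly Re ~ {re_fly:.0f}, model Re ~ {reynolds(v_model, L_MODEL, NU_OIL):.0f}")
print(f"model wing speed ~ {v_model * 100:.1f} cm/s")
```

    Because the two wings operate at the same Reynolds number, forces measured on the big, slow model can be rescaled directly to the real insect.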

    In the last few years, Ellington's and Dickinson's labs together finally managed to explain the lift generated by insect wings. As published in a handful of papers, insect flight is the result of decidedly unconventional aerodynamics—a threesome of phenomena, involving the creation of a spiral leading-edge vortex (also known as a delayed stall), rotational lift, and wake capture. All three phenomena had been identified years ago and had been suspected of providing the necessary lift to insects, but the robots allowed researchers to measure the actual forces involved and provide the requisite experimental verification.

    Now both groups are designing small flying robots for DARPA. Ellington says his recent wind-tunnel studies suggest that insect wings mounted on rotors, like a helicopter, generate as much lift as flapping insect wings and that a rotor design may be much more practical for a working robot. “My own aim,” he says, “has always been to be able to build my own insect and fly it around the room under remote control. If you can do that, then you really understand how they're flying.” That goal, however, is still years away.

    Learning from robotuna

    A similar desire to understand how animals manage their feats of locomotion led Michael Triantafyllou, an MIT oceanographic engineer, to probe the secrets of fish propulsion. Triantafyllou decided to build a robot tuna to study underwater vortices, in particular the vortices that propel fish forward rather than dragging them back. Real tuna are champion long-distance swimmers, and so presumably their physiques are highly evolved to manipulate the forces around them as they swim. Triantafyllou and his collaborators used a taxidermist's cast of a bluefin, then built an internal musculature of six “links,” each of which can be swiveled back and forth by its own motor. The links are covered by plastic and aluminum ribs supporting a skin of padded foam and the same Lycra of which swimsuits are made. The 2-meter-long robotuna is only “a lab robot,” says Triantafyllou, because it lives in its own water tank, the undersea version of a wind tunnel, and is attached to an overhead carriage through a thin strut that holds it in place and transmits commands and electricity to the motors. By using a robot tuna in lieu of the real thing, Triantafyllou and his colleagues can precisely control its motion and the amount of energy it expends to swim. They can then compare that energy to the propulsive force the robot exerts, while studying the flow of water over its body.
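
    One common way to drive an articulated body like the six-link robotuna is to command each joint with a phase-lagged sinusoid, so that a wave of bending travels from head to tail. The sketch below illustrates that generic scheme; the frequency, amplitude, and phase lag are assumptions for illustration, not the parameters of the MIT controller.

```python
# Traveling-wave joint commands for a six-link articulated fish body.
# Each joint follows the one ahead of it with a fixed phase lag, so a
# bending wave propagates rearward. All constants are assumed values.
import math

N_LINKS = 6
FREQ_HZ = 1.5        # tail-beat frequency (assumed)
PHASE_LAG = 0.6      # radians of lag per joint (assumed)
AMPLITUDE = 0.3      # peak joint angle, radians (assumed)

def joint_angles(t: float) -> list[float]:
    """Commanded angle of each joint at time t: a rearward-traveling wave."""
    omega = 2 * math.pi * FREQ_HZ
    return [AMPLITUDE * math.sin(omega * t - i * PHASE_LAG)
            for i in range(N_LINKS)]

# The body shape repeats exactly one tail-beat period later:
period = 1 / FREQ_HZ
a0, a1 = joint_angles(0.1), joint_angles(0.1 + period)
assert all(abs(x - y) < 1e-9 for x, y in zip(a0, a1))
```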

    Triantafyllou and his colleagues realized that as the tuna swims, swishing its vertical tail fin from side to side, it minimizes the amount of energy needed to form the propulsion vortices while also controlling the flow around its body to reduce drag and turbulence. “Both mechanisms are hard at work,” he says.

    The MIT researchers then proceeded to build a robot pike. Their creation is about 1.3 meters long and composed of three links plus a tail. “This was primarily constructed to study acceleration,” says Triantafyllou, as the pike is “a very aggressive and agile fish, a master at fast starting and turning.” The robopike is autonomous, meaning it needs no strut for support, has onboard batteries for power, and receives commands via a wireless modem in its nose. “You can put it in water and it starts swimming,” says Triantafyllou.

    By manipulating the robopike to precisely follow commands as real fish never would, Triantafyllou and colleagues learned how fish maneuver—“an exercise in vorticity control,” says Triantafyllou. In order to turn, the fish has to push hard on the water to get going, and that requires, in effect, creating a temporary jet of water on one side. “The way they do it is by bending their body, which begins the formation of two very large vortices, and then the tail spins the one closest to the tail and then [the tail] spins the other vortex, closest to the head, to generate the two-vortex pair, which shoots out and generates the force needed for fast starting or turning.”

    Now, with support from ONR and DARPA, Triantafyllou and his colleagues are embarked on a pair of collaborations to build autonomous underwater vehicles (AUVs) that are more agile and maneuverable than existing miniature submarines. “If you compare a dolphin, for instance, with an AUV,” says Triantafyllou, “the most striking difference is the ability of the dolphin to turn on a dime. So if you need to operate in areas that are cluttered, shallow, or with lots of waves, or if you want to do dangerous kinds of work, you want these very dexterous robots that can move quickly, position themselves in currents, and pack a lot of power.”

    The end result of all these collaborations is likely to be a world of new bio-inspired robots to help humans, although so far few robots have successfully made the leap out of the controlled environment of the lab into the unpredictable territory of the real world. Advocates argue that there's another reason for pursuing this line of work: The technology developed will likely yield tremendous unforeseen benefits later—what Dickinson calls “the moon shot” rationale. “The amount of technology that needs to be developed to build something like an autonomously flying insect is extraordinary,” he says. “Fifty years from now, people will be talking about the technology that came off these projects in the same way they now talk about the technology that came out of the space program. There's nowhere near the same amount of money going into it, but we're going to reap similar rewards in terms of the technology.”

  19. Better Than Nature Made It

    1. Gary Taubes

    As poets are inclined to point out, the ways in which inspiration can be taken from nature are wondrous, infinite, and varied. In the pursuit of bio-inspired robotics, inspiration doesn't always mean simply copying nature. Instead, many roboticists believe in distilling the fundamental principles at work in organisms and then incorporating those principles in robots, creating a machine that may look nothing like the organisms that inspired it. “We think blind copying is exactly what you don't want to do,” says Robert Full, a biologist at the University of California, Berkeley. “You will fail miserably, because nature is way too complex.”

    Full has studied the locomotion of organisms with two, four, six, eight, and 44 legs—the latter being centipedes—and has concluded that their locomotion is all based on the same basic model, known as a spring-mass system. In effect, they all bounce as they run, like a mass on top of a spring, using alternating sets of legs. “To put it simply,” he says, “they act like a pogo stick. And all these legs are bouncing along with the same patterns. The easy way to think about it is that one of your legs works like two legs of a trotting dog or three legs of an insect or four legs of an eight-legged crab and so on.”
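
    Full's pogo-stick picture can be illustrated with a minimal one-dimensional spring-mass simulation: a point mass lands on a massless spring leg, compresses it, and is bounced back up at essentially its landing speed. The mass, stiffness, and leg length below are illustrative, not fitted to any animal.

```python
# Minimal 1-D sketch of the spring-mass ("pogo stick") model of running:
# a point mass falls onto a massless spring leg and is bounced back up.
# Parameters are illustrative assumptions, not measured animal values.

M = 1.0        # body mass, kg
K = 2000.0     # leg spring stiffness, N/m
G = 9.81       # gravity, m/s^2
L0 = 0.2       # rest length of the spring leg, m
DT = 1e-5      # integration time step, s

def stance_phase(v_landing: float) -> float:
    """Integrate the stance phase; return the upward takeoff speed."""
    y, v = L0, -v_landing            # foot touches down with the leg at rest length
    while y < L0 or v <= 0:          # until the leg re-extends moving upward
        spring = K * (L0 - y) if y < L0 else 0.0
        a = spring / M - G
        v += a * DT                  # semi-implicit Euler keeps the energy stable
        y += v * DT
    return v

takeoff = stance_phase(2.0)
print(f"landing at 2.00 m/s -> takeoff at {takeoff:.2f} m/s")
```

    Because the spring stores and returns the energy of the bounce, the mass leaves the ground at very nearly its landing speed, which is the essence of running as a series of elastic bounces.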

    As to why insects have sprawled postures with their legs on the outside of the body, while humans, dogs, and cats do not, that can be boiled down to a second general principle, says Full. Working with Princeton University mathematician Phil Holmes and others, Full showed that the sprawled posture serves as a self-stabilizing system. As the organism runs along on uneven ground or is buffeted by a predator or a gust of wind, the sprawled legs can absorb the sideways motion and keep the organism's center of mass over its legs where it belongs. “A leg sticking out can act as both springs and shock absorbers,” says Full. “Bend it to one side, and it just tosses you back.”

    Full believes these two observations are general principles of effective locomotion and that any robot that employs them will display the benefits. As supporting evidence, he offers up Robot Hexapod, or RHex. RHex was designed to utilize pogo stick legs and sprawled posture to get by in the world, no matter how rough the terrain. Built by a collaboration of researchers led by engineers Dan Koditschek of the University of Michigan, Ann Arbor, and Martin Buehler of McGill University in Montreal, RHex is roughly the size of a shoe box and weighs 7 kilograms. It has a six-legged sprawled posture and C-shaped plastic legs that provide the necessary springiness and self-stabilization. The legs are mounted on hip joints that rotate a full 360 degrees, taking the legs around with them. RHex doesn't look much like an insect until you see it walking across rough terrain, which it does effortlessly, with neither eyes to see nor nerves to feel, at a speed of a meter a second.

    “It's the fastest running legged platform I know of,” says Alan Rudolph, who manages the controlled biological and biomimetic systems program at the Defense Advanced Research Projects Agency. “And it's pretty simple but quite stable.” Or as Full puts it, “RHex demonstrates the point that you don't need to copy things to make a better robot.”

  20. Making a Robot Lobster Dance

    1. Gary Taubes

    It's one thing to make a robot walk, quite another to make it engage in all the complex behavior that a biological organism might demonstrate. But that's what neurophysiologist Joseph Ayers of Northeastern University in Boston is trying to achieve with his robot lobster, which has been designed to function as an underwater, remote-sensing autonomous vehicle.

    Ayers has created the robot's control system by taking films of lobsters in motion, breaking them down into specific movements and postures—sideways crawling with claws extended, for instance—and then turning that into a matrix tying the 21 basic lobster movements and postures to the commands that put the robot in motion. For instance, rotating to the left can be done by telling the lobster's four right legs to move forward while simultaneously instructing the four left legs to move backward. This control matrix, says Ayers, “plays like a player piano. You would be amazed at how accurately it describes the behavior. It flabbergasted me the first time I got it working.”
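
    In software terms, such a control matrix is simply a lookup table from whole-body behaviors to per-leg commands. The sketch below is a deliberately simplified, hypothetical version, with four behaviors and an invented command encoding rather than the real robot's 21 movements and postures, but it captures the “player piano” idea.

```python
# Hypothetical sketch of a behavior-to-leg-command control matrix for a
# lobster-like robot. The behavior set and encoding are invented for
# illustration; the real control matrix covers 21 movements and postures.

FWD, BACK = "forward", "backward"

# Legs indexed 0-3 on the left side, 4-7 on the right side.
CONTROL_MATRIX = {
    "walk_forward":  [FWD] * 8,
    "walk_backward": [BACK] * 8,
    "rotate_left":   [BACK] * 4 + [FWD] * 4,  # left legs back, right legs forward
    "rotate_right":  [FWD] * 4 + [BACK] * 4,  # right legs back, left legs forward
}

def command_for(behavior: str, leg: int) -> str:
    """Look up the per-leg command that realizes a whole-body behavior."""
    return CONTROL_MATRIX[behavior][leg]

# Rotating left: every right-side leg steps forward...
assert all(command_for("rotate_left", leg) == FWD for leg in range(4, 8))
# ...while every left-side leg steps backward.
assert all(command_for("rotate_left", leg) == BACK for leg in range(4))
```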

    The control system may be the easy part, however, compared to the engineering. Rather than using motors to move the robot around—using them as “actuators,” in roboticists' lingo—Ayers has opted for “muscle wire,” a technology brought to his attention by his brother-in-law, an amateur inventor. Muscle wire contracts when heated, which can be done by sending a current through it, but it has to be cooled again to get it to relax. “We have had to learn how to train this stuff,” says Ayers. “There's just a lot of basic engineering that has slowed us down.”

    At the moment, the movements of the robot lobster are limited to walking and turning. But eventually Ayers expects his creations to do everything real lobsters do and maybe do them better. The robot lobsters will have sonar to navigate and to receive instructions, and they should be able to crawl across the ocean floor, avoid obstacles, and demonstrate “investigative behavior.” Eventually, the goal is to use the lobster to sniff around curious or suspicious objects like potential undersea mines.

    Ayers realizes he has a long way to go, but he's optimistic if not downright impressed with what he's wrought. “When you see this robot on the Web page,”* he says, “it will blow your mind.”

  21. In Nature, Animals That Stop and Start Win the Race

    1. Elizabeth Pennisi

    Researchers studying how animals move in the wild find that intermittent locomotion offers a surprising array of advantages over keeping a steady pace

    In 1995, marine physiologist Terrie Williams was stumped. After studying the oxygen requirements of diving dolphins, she had carefully calculated that dives to 200 meters required 28% more oxygen than the animal could possibly inhale or have in reserve. Deep, prolonged dives might well be fatal. Yet in field experiments, somehow her study subjects—trained bottlenose dolphins—easily plummeted to depths well below 200 meters and returned safely, with ample reserves of oxygen. Now after 5 years of arduous field experiments—strapping video cameras to the backs of dolphins, seals, and whales, in both the Pacific and Antarctic oceans—Williams and her colleagues at the University of California, Santa Cruz, have finally discovered the diving dolphins' secret.

    As she reports on page 133, rather than swimming—and consuming oxygen—all the way down, dolphins take a few strokes and then glide as long as possible, a trick biomechanicists call intermittent locomotion. By doing less work, the animals use less oxygen, and so can dive deeper and longer. This was quite a surprise, for dolphins and whales have been intensely studied for years and no one had any inkling that this diving behavior existed. “Only by going back and looking at the behavior [in the field] could we find this out,” Williams notes.

    For decades researchers have emphasized steady-state locomotion, bringing organisms into the laboratory and watching them move at a steady pace. Besides studying dolphins and fish in flow tanks, for example, they used wind tunnels for birds and treadmills for creatures from mice to kangaroos. But Williams's finding is just one of a stream of recent results indicating that that focus was only a first step. The new work shows that animals from aquatic invertebrates to humans move like window shoppers, stopping and starting as they seek out food, mates, or shelter.

    The findings have “really begun to cast doubt on the way we have looked at locomotion in animals in the past,” says Frank Fish, a functional morphologist at West Chester University in West Chester, Pennsylvania. “A whole new area is opening up in the way we perceive energetics in organisms.” Probing the fitful nature of locomotion is helping researchers understand how various organisms' bodies and biochemistry are adapted for movement, and it may even have applications in human medicine.

    For example, at a recent symposium* organized by comparative physiologist Randi Weinstein of the University of Arizona, Tucson, researchers compared notes and found that intermittent locomotion has surprising benefits for organisms—everything from allowing time to notice the surroundings to saving energy. In some respects human muscles appear to be more efficient when working intermittently than when working steadily, for example. This suggests better ways of helping people make the best use of their bodies, says Weinstein. “By inserting rest and pauses, and changing the [exercise] interval, it might be possible to decrease the physiological load on the body [of] people with compromised systems,” such as those recovering from heart problems, she says.

    But integrating behavior, biochemistry, physiology, and biomechanics to understand intermittent locomotion will not be easy. There are good reasons why researchers analyzing locomotion previously concentrated on steady-state experiments. Such analyses suited theorists, as modeling the mechanics and energetics of steady movement is more tractable than the mathematics of changing gaits or repeated starts and stops. And designing effective experiments in natural settings can be a challenge, says Williams, who should know.

    To solve the paradox of how marine mammals dive so deeply, her team first had to develop or track down technologies to monitor the animals as they swam. The researchers coaxed a prosthetics manufacturer to make a custom-molded plastic housing to fit over a dolphin's dorsal fin, mounted a pressurized case containing cameras on the plastic, and then stabilized the device with a thin strap around the dolphin's belly. On some dives the camera was mounted facing backward to view the fluke, and other times it faced forward to view the flippers. Other sensors tracked depth, temperature, speed, and acceleration. Even with this added bulk, “the animals did dive quite well,” Williams says.

    The videos revealed that a dolphin strokes with its tail at the beginning of a dive, but then spends as much as 2 minutes motionless. Intermittent swimming and gliding continue for both the descent and the ascent. Weddell seals and blue whales—monitored by video cameras attached to their backs with high-tech suction cups—have adopted this strategy too, Williams's team found. For dives of less than 50 meters, the seals stroked the whole way down, but to go deeper, they did a series of strokes and glides. Like the dolphins, the seals glided about 80% of the time during dives deeper than 300 meters. Moreover, “the deeper they dive, the more time they spend gliding,” Williams points out. Once below 80 meters, the air in the lungs is compressed enough that the animals start to sink without effort, and gliding becomes quite effective. “The animals take advantage of the change in pressure and the resulting change in buoyancy,” she says.
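
    The buoyancy mechanism Williams describes follows directly from Boyle's law: lung air is squeezed to a fraction of its surface volume with depth, and the lift it provides shrinks in proportion. The sketch below uses round, assumed values (a 10-liter lung, standard seawater density) purely for illustration.

```python
# Boyle's-law sketch of why deep divers become negatively buoyant: lung
# air compresses with depth, shrinking the buoyant force it provides.
# Lung volume and densities are round illustrative assumptions.

RHO_SEAWATER = 1025.0    # density of seawater, kg/m^3
G = 9.81                 # gravity, m/s^2
LUNG_VOL_SURFACE = 0.01  # assumed ~10 L of lung air at the surface, m^3

def lung_buoyancy(depth_m: float) -> float:
    """Buoyant force (N) from lung air at a given depth, via Boyle's law."""
    pressure_atm = 1 + depth_m / 10          # roughly 1 extra atm per 10 m
    vol = LUNG_VOL_SURFACE / pressure_atm    # isothermal compression
    return RHO_SEAWATER * G * vol

surface, deep = lung_buoyancy(0), lung_buoyancy(80)
print(f"lung lift: {surface:.0f} N at surface, {deep:.0f} N at 80 m")
```

    At 80 meters the ambient pressure is about 9 atmospheres, so the lung air, and the lift it provides, has shrunk to roughly a ninth of its surface value, which is consistent with the animals sinking without effort.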

    That was apparent in one dive by a blue whale. By the time the whale reached 90 meters, it was spending 80% to 90% of its time sinking. Strokes were so slow that the researchers had to speed up the film seven times to see them, Williams says.

    To determine whether the animals really saved oxygen, she tested Weddell seals in the Antarctic. Her team drilled a hole into the ice, covered it with a Plexiglas dome, and then measured the gases respired by a seal returning from a dive. The researchers found that intermittent swimming is much more effective in terms of oxygen use. Continuous swimming demands 65 milliliters of oxygen per kilogram of the animal's weight, while the swimming and gliding mode requires only 45 ml/kg.
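
    The reported figures translate into roughly a 31% oxygen saving for stroke-and-glide swimming, as a quick check confirms:

```python
# Back-of-envelope check on the reported oxygen costs: continuous
# swimming at 65 ml O2/kg versus stroke-and-glide at 45 ml O2/kg.

continuous = 65.0   # ml O2 per kg of body weight, continuous swimming
glide_mode = 45.0   # ml O2 per kg, intermittent stroke-and-glide swimming

saving = (continuous - glide_mode) / continuous
print(f"stroke-and-glide saves ~{saving:.0%} of the oxygen cost")
```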

    Both these results and the effort put into getting them are quite impressive, says Robert McLaughlin, a behavioral ecologist at the University of Guelph in Ontario. “I had no idea that seals and other marine mammals were using intermittent locomotion, and what [the researchers] had to go through to make those measurements was amazing,” he says. The effort demonstrates the payoff of a more natural approach to animal locomotion, adds Robert Full, a biomechanicist at the University of California, Berkeley, and co-organizer of the symposium. By working with animals in the field, Williams “got results that completely changed her point of view,” he notes.

    Intermittent locomotion appears to have the energetic edge in other organisms too, including humans, as James Timmons, a physiologist at Pfizer Central Research in Sandwich, United Kingdom, has shown. He and Paul Greenhaff of the University Medical School in Nottingham have been studying the constraints on muscle function at the molecular level. In particular, they have been looking at the effects of exercise on an enzyme called pyruvate dehydrogenase, which plays a key role in energy production by muscle. Their work and that of others are refining current views of how active muscle regulates its energy use.

    These studies show that when a muscle works hard, it turns to the glucose stored in the polysaccharide glycogen for the necessary energy. To access that energy, the glucose is split out of the glycogen and converted in a series of steps into pyruvate. Pyruvate dehydrogenase comes in at this point, converting the pyruvate into acetyl molecules (acetyl-CoA), which in turn are used by mitochondria, the cell's power plants, to produce ATP, the energy currency of the cell (see diagram). Extra pyruvate is shunted into another pathway that creates lactic acid. If ATP is not produced fast enough, the muscle turns to phosphocreatine, a last-ditch energy source. But when the phosphates that are a byproduct of this fuel build up, they and the accumulated lactic acid cause the muscle fatigue familiar to every athlete.

    In previous work, Timmons and others had found that when a muscle is at rest, pyruvate dehydrogenase is held in check by having a crowd of phosphate molecules attached to it. During exercise, when energy demand is high, the phosphates are removed, activating the enzyme and allowing the muscle to switch to glycogen fuel and to contract. Afterward, another enzyme puts the phosphates back on, taking several minutes to complete the job.

    To probe the molecular consequences of intermittent exercise, Timmons and his colleagues had groups of eight to 10 volunteers do several 8-minute sets of knee extensions, with rests between each set, and took small samples of leg muscle before and after each set. They found that when people rest briefly and then work the same muscles again, fewer and fewer phosphates latch onto the dehydrogenase during each successive rest. As a result, the enzyme is activated faster, and “the muscle is better able to cope with subsequent contractions,” Timmons explains.

    In addition, with each new round of exercise, the muscle became more efficient at using glycogen, as shown by levels of glycogen, lactic acid, and phosphocreatine in the muscle. In each subsequent round of exercise, the researchers found a slower buildup of lactic acid and less dependence on phosphocreatine. “With each subsequent bout, the muscle gets smarter, so it directs more glycogen toward [ATP production] instead of lactate formation,” says Timmons. “[The muscle] is clearly working more efficiently from a metabolic control perspective.”

    Timmons hopes that others will examine these biochemical parameters in other species as a way of understanding why animals “walk and wait” as they move. So does Full. “For a long time we had assumed muscles were built to operate best at a steady state,” he points out, “but things might be built to turn on and off and make transitions.”

    Energetics advantages aside, it seems that animals sometimes have good behavioral reasons to move in fits and starts. At the comparative biology meeting, McLaughlin of Guelph described a literature survey that he conducted with Donald Kramer of McGill University in Montreal. They analyzed 175 field and laboratory observations, some dating from the 1970s, and found that many animals—birds, lizards, and chipmunks, for example—stop periodically as they search for food or seek mates and shelter. And “there are hints in the literature that animals stop for sensory reasons,” rather than to save energy, McLaughlin said. Some researchers theorized that animals were pausing to check where they needed to go next, for example.

    So McLaughlin and Kramer delved into the literature on vision and perception and found that, sure enough, motion can interfere with detection of potential prey and predators, particularly mobile ones. In other words, it's hard to run and look around at the same time. “It may be simpler for the animal to stop” than for the brain to process so much rapidly changing sensory input, says McLaughlin. Kramer and a McGill undergraduate, Andrew McAdam, have field evidence consistent with this, showing that food-hoarding squirrels appear to pause in order to spot predators.

    As the researchers reported in 1998 in Animal Behaviour, they put piles of nuts either in the open or among trees, and observed the behavior of squirrels as they approached the feeding stations and as they returned to the trees to hide the nuts. The animals stopped far more often when they were heading into the open—where hawks and dogs could catch them—than when food was in cover under trees or when they were racing back to their burrows, already avoiding predators as best they could. “The need to extract and analyze information from environments may be a reason for moving intermittently,” Arizona's Weinstein concludes.

    But there may be still other reasons as well, depending on the type of organism. For example, researchers find that small aquatic creatures such as copepods and jellyfish also follow the now-familiar pattern of swimming, pausing, then swimming again. Yet Thomas Daniel and his group at the University of Washington, Seattle, have analyzed the forces generated by and acting on these small animals and found that, unlike dolphins and whales, these little creatures are far too small to achieve the momentum needed to coast. To the contrary, once these organisms stop, they must work hard to overcome the viscosity of the water and start up again.

    Instead of saving energy, Daniel thinks the intermittent motion is a way to enrich the animal's environment. As a tiny shrimplike copepod swims away from a spot depleted in nutrients, for example, its wake sets up a small amount of turbulence that washes more fresh water to the new location. If the animal kept moving, its wake would always be behind it, but by stopping it sits squarely in the wake's path. “As you move, you increase the flux of oxygen to the body,” Daniel points out. “It's possible that the energy disadvantage to moving is offset by an advantage in the flux of nutrients and gases.”

    The more Daniel and others look into the question of why various species move in fits and starts, the more possible advantages they find. “It's up to us to figure out if there's a behavioral, physiological, or biomechanical advantage,” says Williams. That can be quite a challenge, but the effort is warranted, says Full, “because [intermittent locomotion] truly represents the way the animal moves.”

    *The annual meeting of the Society for Integrative and Comparative Biology, Atlanta, 5 to 8 January.

  22. Tracking the Movements That Shape an Embryo

    1. Gretchen Vogel

    Biologists studying a key moment in development are at last beginning to link genetic signals with the physical changes that create an embryo

    The most important time in your life, says embryologist Lewis Wolpert of University College London, “is not birth, marriage, or death, but gastrulation.” Gastrulation, which in humans happens about 2 weeks after egg meets sperm, is a massive rearrangement of the embryo that transforms a relatively uniform ball of cells into a multilayered organism with a recognizable body plan. Cells stream across the embryo in a precise choreography that is strikingly similar among organisms from flies to fish to people. Although cell movements are crucial at many other times in development—and sometimes involve longer journeys (see sidebar)—if the intricate dance of gastrulation goes awry, the resulting defects are usually so catastrophic that the embryo dies.

    Just what causes the cells to move and guides them to their designated places has fascinated—and frustrated—embryologists for more than a century. Developmental geneticists have fingered dozens of genes involved in controlling gastrulation, but most of them code for signaling molecules, which switch other proteins on or off. And although cell biologists have made progress in understanding how individual cells move, connections between the two fields have remained elusive.

    In the past few years, however, scientists have begun to bridge the gap. They are at last linking genetic signaling cascades to molecules that actually affect the movements of gastrulation, including those that cause cells to stick together and those that promote movement.

    Although there's a long way to go before scientists fully comprehend gastrulation's remodeling, these new findings are injecting a sense of optimism into the field. Knowledge of cell movement in development “is about to really explode,” says University of California (UC), Berkeley, developmental biologist Richard Harland. “In a couple of years' time, there's going to be a quantum difference in our understanding.”

    The first solid clues about the forces that drive gastrulation came a decade ago, in groundbreaking work by developmental biologist Ray Keller of the University of Virginia and his colleagues. Using fluorescent dyes and video microscopy, his team for the first time discerned the shape changes and movements that living embryonic frog cells undergo during gastrulation. Most classical embryologists had guessed that the rearrangements of gastrulation arose from cell division—that certain cells divide faster than others and change the embryo's shape. But Keller and his colleagues revealed a far more active process, in which cells constantly shift places. They described a pattern of “convergent extension,” in which cells converge on the embryo's midline (the precursor of the backbone) and stay there. As midline cells crowd together, they push each other toward the future head and tail, and the embryo lengthens. The cells seem to move by a process called intercalation, in which cells grab onto their neighbors and use each other as a sort of moving ladder to haul themselves toward the midline. Researchers have since observed similar cell movements in organisms from flies to mice to humans.

    In frogs, gastrulation begins in the region called the “organizer,” which, among other things, directs certain cells to tuck inside the relatively hollow embryo and begin to form various layers. Researchers have therefore looked among the proteins expressed in the organizer for the elusive factors that trigger cells to move. In late 1998, a team led by developmental biologist Eddy De Robertis of UC Los Angeles came up with a promising candidate: a molecule called paraxial protocadherin, or PAPC, which is expressed both in the organizer and in the cells of the developing trunk that undergo convergence and extension. PAPC, like other proteins in the protocadherin family, has a “tail” outside the cell that helps cells adhere to one another, so the scientists expected it simply to make cells stick together.

    They were surprised to find that PAPC also prompts cell movement. Adding PAPC to so-called animal cap cells, which can undergo gastrulation-like movements in vitro, caused the cells to converge and extend. And when De Robertis's team injected a defective version of PAPC into one cell of a two-cell embryo, blocking the protein in half the embryo, the cells on that side failed to move toward the midline. By allowing cells to stick to one another and haul themselves forward, PAPC may help trigger convergence and extension, says De Robertis. The work, published in Development in December 1998, is “a very striking result,” and makes a strong case that PAPC is one of the proteins crucial for prompting this unusual cell movement, agrees cell biologist Barry Gumbiner of the Sloan-Kettering Institute in New York City.

    De Robertis's group, in work with Charles Kimmel of the University of Oregon, Eugene, and Sharon Amacher of UC Berkeley, was also able to add an important connection to the PAPC pathway. The researchers found that in fish, the PAPC protein showed up in the same pattern as a gene called spadetail, which codes for a transcription factor that turns on other genes. In embryos lacking spadetail, papc is not expressed and trunk cells fail to move toward the midline. Thus the researchers propose that spadetail somehow turns on PAPC, which in turn allows cells to journey to the midline. This work provides one of the first links between a transcription factor crucial to gastrulation and a molecule that changes cell behavior, Amacher says.

    Before cells can move at all, they must first loosen the adhesives holding them together. Gumbiner studies a family of such protein adhesives called cadherins, which, like the related protocadherins, protrude from the cell surface and act as hooks and grapples, allowing cells to stick to each other. In experiments with cultured embryonic frog cells, he has found that the protein activin, which plays numerous roles in gastrulation, weakens the cadherins' grip and allows cells to move. Last year, Gumbiner and his colleagues developed an antibody that reactivated a protein called C-cadherin even in the presence of activin. The effect was dramatic: Although all the genes characteristic of mobile cells turned on, the cells did not move. That suggests that C-cadherin acts something like a parking brake that must be lifted to let cells move, and that it is the final molecule—the one that gets the job done—in a chain of signals.

    New work on a protein called Snail shows that other members of the cadherin family are also key players in gastrulation. Like spadetail, Snail is a transcription factor that is required for certain cells to move during gastrulation, but researchers have had few clues about the proteins Snail regulates. In the February issue of Nature Cell Biology, teams led by Angela Nieto at Instituto Cajal in Madrid, Spain, and Antonio Garcia de Herreros at Universitat Pompeu Fabra in Barcelona reported that Snail temporarily turns off the E-cadherin gene, which codes for a protein that helps epithelial tissues such as skin hold together. The result supports the idea that cadherins stop cells from migrating as they do during gastrulation, says Gumbiner.

    That work may have applications to life beyond gastrulation as well: Many tumor cells, especially those with an ability to travel to new parts of the body, lack normal levels of E-cadherin. The researchers hope that drugs that block Snail might make tumor cells more sticky and less likely to spread.

    As they try to link networks of signaling proteins to the molecules that trigger movement, scientists are getting help from new imaging techniques. For example, a team led by Kimmel and Richard Adams of Oxford University has developed computer-generated, time-lapse movies that trace the paths of individual zebrafish cells during gastrulation, which can pinpoint the motions that go awry in mutant embryos. In embryos with the no-tail mutation—which fail to develop a notochord or tail—cells begin to move at the normal time but seem to lose their way and do not gather at the midline, apparently because they lack a key molecule that controls the cells' compasses.

    Such technologies, combined with new insights from molecular biology, bode well for solving the long-standing puzzle of gastrulation, says Kimmel. “Technically we are able to do so much more than a few years ago,” he says. “It's a fantastic world ahead. We're right on the edge of some wonderful stuff.”

  23. Neural Crest's Joyride Through the Embryo

    1. Gretchen Vogel

    While the movements of gastrulation dramatically reshape the embryo (see main text), many of the cells involved travel only a short distance. In contrast, cells known as neural crest make epic journeys, from the back of the developing brain as far as the length of the gut and the ends of developing limbs. “When it comes to cell migration, [neural crest] is king,” says developmental biologist Scott Fraser of the California Institute of Technology in Pasadena. His studies of these odysseys suggest that these cells' social behavior—their interactions with their neighbors—has more sway over their final destinies than expected.

    The collection of cells that make up the neural crest starts off in the precursor of the brain and spinal cord, the neural tube. Eventually the cells migrate throughout the embryo and adopt a variety of guises, becoming the bones and connective tissue of the lower face as well as the peripheral nerves that stretch throughout the body.

    For years, scientists have followed the cells' travels by tracing the fate of cells transplanted from one embryo to another or by examining pieces of embryonic tissue kept alive in culture dishes. But Fraser and postdoc Paul Kulesa have now found a way to watch cell movements “in ovo”—in a living egg.

    In one of the more ingenious uses of Teflon, the scientists cut a window into a chicken eggshell, label certain cells with fluorescent dye, and then seal the hole with a clear Teflon membrane. The membrane lets oxygen in but prevents the egg from drying out, allowing the team to observe cell movements inside the embryo for 3 to 5 days—two to three times longer than before.

    Previous work had led scientists to believe that cells in the neural crest behave like workday commuters on a subway system: Before the cells set out, scientists thought, they were programmed to take regular routes to a certain destination. Cells from different parts of the neural crest seemed to group into distinct “streams” that led to specific targets—the jaw, say, or the nerves of a developing limb. But the new work, published in the March issue of Development, makes it “pretty clear that the cells haven't read most of those papers,” says Fraser.

    Instead, Kulesa and Fraser's observations suggest that these cells behave more like teenagers on a joyride, relying on cues from neighboring cells along the way to decide their route and final destination. Individual cells change routes midway and jump from one stream to the next. In some cases, cells form long filopodia—extensions of cytoplasm—that reach out to touch cells in another stream, and in a few cases the body of the cell follows. These data suggest that a cell's neighbors and the signals it detects from them may be more important than the particular genes turned on before it sets out, says Fraser.

    The newfound ability to peer into a living embryo offers new insights into cells' travel habits, agrees developmental biologist David Wilkinson of the National Institute for Medical Research in London. “Now that we can really see how cells are behaving, it gives new ideas about the responses cells are giving each other,” he says. “Things are less rigid than people had thought.”
