News this Week

Science  11 Jan 2002:
Vol. 295, Issue 5553, pp. 246
  1. CLINICAL TRIALS

    Proposed Rules Aim to Curb Financial Conflicts of Interest

    1. Jocelyn Kaiser

    It doesn't go quite as far as “Just say no.” But a new policy adopted by the Association of American Medical Colleges (AAMC) urges universities to disqualify investigators from conducting clinical trials if the scientists hold a significant financial interest in companies that could be affected by the results.

    The guidelines,* issued last month, are the latest response to the 1999 death of Jesse Gelsinger, a volunteer in a clinical trial at the University of Pennsylvania in Philadelphia, in which the university and one of the scientists had equity in a company that was expected to benefit from the research. The incident rekindled public concern that ties between academia and industry are clouding the objectivity of clinical research and compromising patient safety. Since then, several journals and professional societies have weighed in, and the U.S. Department of Health and Human Services (HHS) has begun revising its 1995 rules designed to prevent conflicts of interest from influencing studies (see timeline). The HHS guidelines are expected to be finalized later this year.

    But most universities would prefer to police themselves, and AAMC, which objected to the HHS draft proposal (Science, 16 March 2001, p. 2060), has now come up with guidelines that are an attempt to do just that. AAMC leaders are encouraging members to incorporate the guidelines into their own rules. “The hope is that we have taken the issue on responsibly and that [HHS] won't be compelled to come up with more Draconian measures,” says Harvard Medical School dean Joseph Martin, who in November 2000 convened a panel of academic leaders whose deliberations fed into the task force's findings.

    The AAMC guidelines would apply to all human subjects research, not just studies funded by the National Institutes of Health or other federal agencies. They recommend that each institution establish a conflicts-of-interest committee—not just one official as the Public Health Service now requires—to review and “manage” financial ties of clinical researchers. AAMC says the committee should have “clear channels of communication” with an Institutional Review Board (IRB), the ethics review board that approves protocols, to which it should give summaries of its conflict-of-interest review. The policies should also require researchers to share the existence of significant financial conflicts with journal editors and research subjects on consent forms.

    The task force's toughest call, says panel chair William Danforth, was to include the “rebuttable presumption” that individuals with a significant financial interest in the outcome of their work “may not conduct such research.” “The wording of that was very careful and very much debated,” says Danforth, chancellor emeritus of Washington University in St. Louis, Missouri. “This does represent a big shift,” says Mildred Cho, a bioethicist at Stanford University. As for what constitutes a significant financial interest, the AAMC report adds a few new items, such as any equity in a non-publicly traded company, to the Public Health Service's threshold of $10,000 a year in payments or 5% equity. (The Food and Drug Administration's levels are higher.) The policy allows for narrow exceptions, says Danforth, such as studies of medical devices in which the involvement of surgeons is often crucial. Those cases would require “effective management of the conflict” and “credible oversight” by monitoring bodies, the report says.

    Many universities already have such procedures, “but I'm not sure everybody does everything,” says gene therapy researcher Savio Woo of Mount Sinai School of Medicine in New York City, a member of the AAMC task force. One major weakness, according to a recent critical report by the General Accounting Office, is that data are kept “in multiple offices, files, and formats” and not necessarily shared with the IRB that must make the final decision. “Our guidelines are very detailed about the process,” says AAMC's Jennifer Kulynych.

    Not everyone is happy about the proposed new guidelines, however. Task force member Susan Hellman, chief medical officer for Genentech in South San Francisco, declined to endorse them mostly on the grounds that they could impede innovation. And even supporters worry that implementing them will be expensive. “There's no doubt that institutions are going to need more resources and more bureaucracy,” says Danforth. Adds Martin, “Nobody out here has proposed how to pay for this increased scrutiny.”

    The association plans to issue a second report this year on institutional conflicts. The next few months may also see guidelines from the HHS Office for Human Research Protections, whose chief, Greg Koski, says that AAMC's recommendations are “by and large very consistent” with what HHS is contemplating.

    • *Protecting Subjects, Preserving Trust, Promoting Progress: Policy and Guidelines for the Oversight of Individual Financial Interests in Human Subjects Research, www.aamc.org/members/coitf

  2. HIGH-ENERGY PHYSICS

    Repairs Weakened Neutrino Detector

    1. Dennis Normile

    TOKYO—A single cracked photomultiplier tube apparently triggered the devastating accident on 12 November 2001 that has closed Japan's Super-Kamiokande neutrino detector for at least a year. An investigation into the accident has confirmed early suspicions about the sequence of events that destroyed about 7000 of the observatory's 11,000 light-detecting sensors (Science, 23 November 2001, p. 1630).

    The $100 million detector has produced convincing evidence that neutrinos have mass, contrary to decades of theoretical predictions. The wispy particles cannot be observed directly, however, so the 39-meter-diameter, 41-meter-high observation tank is filled with water and lined with photomultiplier tubes that can catch a distinctive glow, known as Cerenkov radiation, produced when neutrinos smash into atomic particles in the water. Last summer, for the first time since the facility was completed in 1996, the water was drained so some 100 burned-out tubes could be replaced. The tank was being refilled when one of the tubes imploded and started a chain reaction that destroyed almost all of the submerged tubes.

    Fix-it fiasco.

    Repairs to the photomultiplier tubes triggered a costly accident when the tank was refilled.

    CREDIT: ICRR

    By analyzing the sequence in which the sensors stopped sending signals, the investigators narrowed down the initial break to one of two tubes—one original, one a replacement—on the floor of the tank. To make the repairs, technicians stood on thick Styrofoam pads placed directly atop the tubes, after determining that the tubes were capable of withstanding the stress. Examining that assumption, investigators applied to an array of 12 tubes eight times the load calculated to have been imposed during the repair operation. One of these tubes subsequently broke at its neck when subjected to a water-pressure test. This result “hints” that the neck of the original tube could have been weakened by the repair work, the report concludes, although the replacement tube might also have been damaged during handling or installation.

    To test the theory that a single imploding tube could destroy thousands of others, the investigating team three times submerged an array of nine tubes and deliberately punctured the central tube. Each time, the shock wave resulting from the implosion broke all the surrounding tubes. Yoji Totsuka, a professor at the University of Tokyo's Institute for Cosmic Ray Research and director of the observatory, says the team plans to test whether acrylic housings for the tubes will contain the shock wave and prevent a chain reaction. They are also working with the manufacturer to develop more shock-resistant tubes.

    The Japanese and U.S. project scientists running the experiment must now convince a University of Tokyo committee that they understand the causes of the accident well enough to prevent it from recurring. If they do, scientists hope to resume some observations within a year using a limited number of photomultiplier tubes. But bringing the facility back to full strength could take 5 years and cost between $15 million and $25 million.

  3. OLDEST ART

    From a Modern Human's Brow—or Doodling?

    1. Michael Balter

    Archaeologists in South Africa have found what may be the oldest known art, dated at least 40,000 years before the earliest cave paintings in Europe. The artifacts, two chunks of red ochre engraved with geometric crosshatches, were recovered from 77,000-year-old cave deposits. It's unclear what the ancient artist meant the marks to represent. Nevertheless, some researchers argue that the find in Blombos Cave, published online by Science on 10 January (http://www.sciencexpress.org/), strengthens the case that modern human behavior arose much earlier than previously thought and that it took root in Africa long before spreading to Europe. Others caution against drawing sweeping conclusions from what may be a relatively rare find.

    Ochre oeuvre?

    Researchers claim these engravings are evidence of symbolic representation.

    CREDIT: C. HENSHILWOOD ET AL.

    Most experts believe that Homo sapiens arose about 130,000 years ago in Africa, when anatomically modern humans made their debut in the fossil record. But scientists have been puzzled by the seemingly long gap between when humans began looking modern and when they started acting modern. Until recently, there was little evidence of modern behavior—such as the use of advanced hunting and fishing techniques and the creation of elaborate tools and art or other symbolic expression—earlier than about 40,000 years ago, the start of Africa's Later Stone Age and Europe's Upper Paleolithic, when stunning cave paintings in France and Spain appeared.

    Since 1993, however, a team led by archaeologist Christopher Henshilwood of the South African Museum in Cape Town has been unearthing at Blombos Cave what it believes is proof of modern behavior during the Middle Stone Age period, 250,000 to 40,000 years ago. In the December 2001 issue of the Journal of Human Evolution, for example, the team described a cache of elaborately worked bone points—which many researchers consider evidence of the ability to visualize a complex form—found in layers older than 70,000 years. But the ochre engravings, unearthed in 1999 and 2000, could be the best evidence yet that humans were capable of symbolic representation that long ago. The smaller piece, 53 millimeters long, has a series of X-like crosshatches, some struck through by a horizontal line. The larger chunk, about 76 mm long, features many X's traversed by three horizontal lines.

    “This is clearly an intentionally incised, abstract geometric design,” argues anthropologist Stanley Ambrose of the University of Illinois, Urbana-Champaign. “It is art.” French cave art expert Jean Clottes is more circumspect. Although “the geometric design is fully deliberate … and shows a desire to achieve symmetry,” Clottes says he is “far from sure” that “it is an incontrovertible instance of symbolic behavior … it could also be a kind of doodling.” What's not seriously in dispute is the 77,000-year date, pegged to charred stone tools in the same soil layer and sand grains in an overlying dune.

    Although many researchers are willing to grant the Henshilwood team's claim that the artist intended to symbolize something, few are ready to embrace a radical new chronology for the spread of modern behavior. “I have a bit of trouble with the argument that this is now the evidence to displace all claims for the earliest modern behavior elsewhere,” says anthropologist Meg Conkey of the University of California, Berkeley. Even if symbolic representation did arise in Blombos Cave, it may have been a fluke: a flicker of insight that died with the artist. “There are at least 30 Middle Stone Age sites scattered across the continent that could be expected to show the kinds of things reported … [in] Blombos Cave,” says archaeologist Richard Klein of Stanford University. But they don't, he says, with the possible exception of a site in the Congo. Ambrose agrees: “[Blombos] remains unique in its abundance of evidence for modern behavior.”

    Henshilwood counters that more Blombos-type discoveries may well turn up at other digs in Africa. “This is just the tip of the iceberg,” he predicts. As for the 30 sites Klein refers to, he says, “most were dug in the 1920s, '30s, and '40s and were not dated properly,” and most were not well excavated.

    If Blombos Cave is an aberration, the task is to try to explain why modern behavior did not appear simultaneously across Africa. Henshilwood suggests that the cave's location overlooking the Indian Ocean—where seafood might have provided a rich diet—provides a clue. “Did those anatomically modern people who ended up in a coastal environment do better?” he asks. “This does seem to be the pattern.” The search for such patterns, some experts say, might be more important than pinpointing the precise origin of modern behavior. “These authors don't need to make big, bold claims to convince us that what they have is important,” says Conkey. “The interesting question is not so much, ‘Is this the earliest?’ but ‘Why did it happen here?’”

  4. EVOLUTIONARY BIOLOGY

    Finches Adapt Rapidly to New Homes

    1. Elizabeth Pennisi

    Birds of a feather don't necessarily stick together. A study of house finches has demonstrated that in just 30 years, finches newly settled in Montana and Alabama begin to look and act quite different from each other, despite being close kin. Alexander Badyaev, an evolutionary ecologist at Auburn University in Alabama, and his colleagues have also shown that these flourishing avian pioneers improve their chances of success in part by controlling the sex of their eggs as they lay them. In this way, mothers influence the size of their offspring, an important survival trait.

    The new work, reported on page 316 of this issue of Science, shows that “the time scale of decades [not centuries] is really enough for animals to evolve,” notes David Reznick, an evolutionary biologist at the University of California, Riverside. “The idea that the [divergence] could be that rapid is really remarkable,” adds Ben Sheldon, an evolutionary biologist at Oxford University, United Kingdom.

    Urban invader.

    Labeling eggs by birth order helped explain the house finches' (above) widespread success.

    CREDITS: A. BADYAEV ET AL.

    By adjusting rapidly to their new habitats, the finches “reduced mortality substantially in their young,” enabling them to outcompete native species, adds Craig Benkman, an evolutionary ecologist at New Mexico State University in Las Cruces. The enhanced survival that resulted “could easily have been sufficient to make a difference between [this species] spreading or not,” explains William Sutherland, an evolutionary biologist at the University of East Anglia, United Kingdom—and spread they did.

    The house finch, Carpodacus mexicanus, calls California and deserts in the U.S. Southwest home, but in the early 20th century these birds were also marketed as pets along the East Coast. When sales were outlawed in 1939, pet store owners in New York released their house finch stocks, not realizing how successful these birds would be in that environment. Now, just 60 years later, “it's one of the most numerous urban birds” in much of the eastern United States, says Badyaev, who wanted to know how the birds could adapt so quickly to diverse environments.

    From New York, the finches headed south, reaching Alabama about 25 years ago; California birds moved into Montana at about the same time. Immediately, differences in climate began to affect the two populations. Badyaev and Auburn colleague Geoffrey Hill tagged thousands of birds at each site and followed their offspring from hatching through adulthood. Over several years, they also looked at how many birds survived winters and how many offspring the tagged nesting pairs produced.

    “Males and females grow differently both within and between populations,” Badyaev found. In Alabama, males grow faster than females and have wider bills and longer tails, whereas in Montana, females grow faster and are bigger overall.

    These diverse features result from differential growth patterns in the young, says Badyaev. And those growth patterns indicate that selection for particular adult traits has influenced development, he adds. In addition to climate influences, he suspects that lifestyle differences between the sexes in either state contributed to the differences between males and females and, subsequently, the two populations.

    Badyaev then looked into what mechanism might be responsible for altering the growth patterns at each locale. Researchers have long known that female birds can control the sex of their offspring. And others have shown that in some bird species, the order in which eggs are laid and subsequently hatch influences the size of the resulting adults, with the first hatchlings tending to grow to be the biggest of the bunch. Badyaev found both factors at work in the finches. Alabaman females lay males first; the final egg laid is female. The opposite is true in Montana. Thus in Alabama, males get a jump on their nest mates and grow bigger, whereas in Montana, females have the growth advantage.

    “Quite a lovely result,” says Sheldon, who, like Reznick, is impressed that Badyaev carried out experiments to confirm his field observations. By switching eggs in one nest with eggs in others, Badyaev and his colleagues reaffirmed, for example, that the order in which the eggs were laid was most important in determining the relative size of the chicks—more so than, say, competition among nest mates. Overall, by biasing the sex of the eggs and laying them in a particular order, the mother increased chick survival by 10% to 20% over chicks from eggs laid in no particular order, they report. Thus adaptation along different trajectories helped make these finches successful in both states.

  5. ENDANGERED SPECIES

    Fur Flies Over Charges of Misconduct

    1. Erik Stokstad

    Amid cries of “malfeasance of the highest order,” two federal agencies have launched investigations into the actions of seven federal and state biologists 2 years ago. The Washington state legislature and the U.S. Congress are also poised to hold hearings. The concern? That the biologists deliberately tried to skew the results of a federal survey of the threatened Canada lynx in national forests. The biologists, most of whom have not been identified, have denied the accusations, according to The Washington Times, which broke the story on 17 December.

    Lynx lair.

    Critics charge that a study of lynx habitat was skewed, but scientists say they were just testing the lab.

    CREDIT: JEFF LEPORE/PHOTO RESEARCHERS

    The survey of 16 states and 57 national forests, which started in 1999, is designed to guide land management plans by determining where lynx reside. To search for the elusive animal, scientists collect hair left on rubbing posts and then send the samples to a lab for DNA analysis. The survey, coordinated by the Forest Service with help from the U.S. Fish and Wildlife Service and state agencies, has controversial implications: Efforts to protect the lynx could limit timber salvage operations—lynx make their dens in fallen trees—or conceivably prevent expansion of snowmobile areas.

    In fall 2000, a Forest Service employee reportedly told superiors about irregularities in the survey protocol. The following February, the service hired an independent investigator. Four months later, according to The Seattle Times, the investigator concluded that although the biologists had deviated from the protocol, they were not trying to skew the results. “The integrity of the overall lynx sampling effort is being maintained,” wrote the Forest Service in a 13 December memo requested by Congress.

    But some in Congress are not convinced. Not only have Representatives James Hansen (R-UT), chair of the House Resources Committee and an advocate of land rights, and Scott McInnis (R-CO) scheduled a hearing for next month, but they have asked the General Accounting Office, the investigative arm of Congress, to probe the incident.

    The Forest Service isn't commenting on the incident or its earlier investigation, citing the inspector general's probe, other than to say that the three Forest Service employees are no longer participating in the survey.

    But according to Tim Waters, a spokesperson for the Washington Department of Fish and Wildlife (WDFW), part of the flap involves two WDFW biologists who participated in the survey. They sent in fur from a captive lynx and a stuffed bobcat as control samples. Jeff Bernatowicz, one of the WDFW biologists, told Science he wanted to check whether the lab could correctly identify lynx hair. There was reason for concern, he says, because another lab's analysis from an earlier survey had erroneously indicated that lynx were present in Oregon. But, Waters points out, controls weren't called for in the protocol, nor did the biologists notify other survey researchers about their actions.

    If that's the case, lynx scientists say, the biologists' actions are not to be condoned but are hardly a major offense. “I don't think it's such a big deal,” says Richard Reading, director of conservation biology at the Denver Zoological Foundation and co-chair of Colorado's Lynx and Wolverine Advisory Team. “They only needed to inform their superiors.”

    Yet that misstep may have cost the broader effort its credibility. As Forest Service Chief Dale Bosworth conceded in a statement, the scientists' actions “have called into question the scientific integrity of the interagency survey.”

  6. CANCER RESEARCH

    Will Bigger Mean Better for U.K. Charity?

    1. John Pickrell*
    1. John Pickrell is a science writer in Hertfordshire, U.K.

    HERTFORDSHIRE, U.K.—After a long and sometimes tense courtship, the United Kingdom's two major cancer charities are ready to unite next month to form a giant funding agency similar to the U.S. National Cancer Institute (NCI). Cancer Research UK, which will be the world's biggest nongovernmental cancer research organization and the United Kingdom's largest fund-raiser, is expected to spark new collaborations at the often-frustrating nexus of basic and clinical research: turning promising test tube findings into experimental therapies.

    Substantial dowries.

    Research funds have grown steadily for the U.K.'s two largest cancer charities.

    CREDIT: CRC/ICRF

    British cancer researchers are hoping that the recipe for happiness within Cancer Research UK, as in many successful marriages, will be the complementary strengths of the partners. The Imperial Cancer Research Fund (ICRF) is a basic research powerhouse that mostly supports in-house labs, whereas the Cancer Research Campaign (CRC) focuses on prevention, treatment, and diagnostic research through extramural grants and at a handful of clinical units it underwrites.

    Andrew Miller, interim chief executive for Cancer Research UK, says it didn't make sense for the two giants to compete for donations rather than collaborating. The charities had raised the bulk of their funds by vying for legacies and other private donations as well as corporate sponsorships. Both also run national networks of shops that sell goods such as secondhand clothing, bric-a-brac, and books. ICRF's 450 stores, staffed by an army of retirees, raise about $9 million a year—often in direct competition with CRC's 270 secondhand shops. “If someone came from outer space and examined this,” says Miller, referring to the competition between the charities, “they would think it was a very daft situation.”

    An alien visitor next month may not find a land of milk and honey: Cancer Research UK's $189 million budget in 2002, although a third larger than the government's total spending this year on cancer research, is more than an order of magnitude smaller than NCI's budget. Still, an outsider would detect considerable enthusiasm for the new entity. The merger “is a very positive step,” says Nick Lemoine of ICRF's molecular oncology unit at Imperial College in London. In the months since the merger plans were announced (Science, 26 January 2001, p. 575), Lemoine has had ample time to contemplate working more closely with CRC colleagues on gene therapy and other projects. And as a bittersweet bonus for their efforts, the 3000 scientists at Cancer Research UK can anticipate an extra $20 million or so after the elimination of 130 managerial and support jobs. Miller says the liberated funding will allow the organization to hire more researchers and boost grants in 2003.

    One lingering concern in the current CRC-supported labs is that ICRF's core strengths will guide the research agenda—especially because ICRF director-general Paul Nurse, a 2001 Nobel laureate in physiology or medicine, will be Cancer Research UK's scientific chief. Nurse could not be reached for comment. Miller, however, has pledged that most research areas will be retained and that funding committees will consist of CRC and ICRF researchers in equal measure. In addition, the combined charity will remain part of a nascent coordinating body, the U.K. National Cancer Research Institute. Pressure at Cancer Research UK will come from having to do more, not less: Scientists at both ends of the research spectrum will be encouraged to team up on “translational” projects, in which the fruits of fundamental research are used to create experimental therapies.

    Observers expect that Cancer Research UK will have an easier time wooing donors than the two charities had as swinging singles. “There were concerns early on that one plus one would not equal two, but market research seems to suggest that merging could in fact raise our profile,” says Neil McDonald, a structural biologist at ICRF's headquarters in London. Indeed, predicts Steve Jackson, deputy director of the Wellcome-CRC Institute of Cancer and Developmental Biology at the University of Cambridge, Cancer Research UK should have more clout among both politicians and scientists. Coupled with a solid research program, that would amount to far more than a marriage of convenience.

  7. ASTHMA RESEARCH

    Missing Gene Takes Mice's Breath Away

    1. Gretchen Vogel

    A strain of mice with a tendency to wheeze may help scientists get closer to the root of asthma in humans. Asthma constricts airways in patients' lungs and leaves them short of breath, sometimes fatally; it afflicts tens of millions of people worldwide and seems to be on the rise in many areas. Scientists are still struggling to decipher the cellular signals at the root of the attacks. Now, in reports on pages 336 and 338, immunologist Laurie Glimcher and her colleagues describe a mouse strain that mimics the human condition and might provide a better model system for studying the disease.

    Proper balance.

    The T-bet gene prompts immature immune cells to become TH1-type cells, keeping the number of TH2 cells in check.

    Researchers have created asthmatic mice before, but through a process involving injections of allergens and irritants. That scenario doesn't match the situation of many patients with chronic asthma, whose attacks are not triggered by known allergens. The new mice resemble those human asthmatics in several key ways, such as having characteristic chronic lung inflammation and thickened airway walls. What's more, the gene responsible for the mice's affliction appears to be misregulated in human asthmatics as well. “It's an exciting model,” says immunologist William Paul of the National Institute for Allergy and Infectious Diseases in Bethesda, Maryland.

    The engineered mice lack a gene called T-bet, which codes for a transcription factor, a protein that controls the expression of other genes. In previous experiments, Glimcher and her colleagues had shown that T-bet affects the development of immune system cells, encouraging the development of so-called TH1 cells. These cells help organize attacks on unfamiliar cells such as invading microbes. They also make proteins that discourage overproduction of a sister cell type called TH2 cells, whose main job is to help the body defend against parasites.

    In most people, a complex set of feedback loops keeps the two cell types in balance. But many scientists suspect that in asthma patients something skews the balance, allowing TH2 cells to predominate. The proteins those cells produce can lead to some of the changes seen in asthma patients' airways, including high numbers of trigger-happy immune cells that spark inflammation.

    To find out more about T-bet's role in the immune system, the team created mice lacking the gene. As expected, the mice produced fewer TH1 cells. Immature immune cells taken from the animals' lymph nodes produced very little interferon γ, the TH1 cells' chief protein product, compared with cells from their wild-type littermates. The cells also produced higher levels of interleukin-4 and interleukin-5, two products of TH2 cells—and prime suspects in fueling asthma.

    The animals' lungs resembled those of chronic asthma patients, with unusually thick layers of collagen and extensive networks of the musclelike cells that constrict airways. And even before exposure to an irritant, the animals' lungs showed signs of inflammation: They had significantly more immune system cells called eosinophils and lymphocytes than their littermates with functioning T-bet. Mice lacking T-bet were also extremely sensitive to the irritant methacholine; their airways narrowed and it took more effort to breathe. Although it is difficult to really hear a mouse wheeze, Glimcher says, “these mice have asthma.”

    T-bet might play a role in human asthma as well. The researchers found that asthma patients had significantly lower levels of T-bet expression in their lungs than people without asthma. Although the genetic causes of asthma are complex, the T-bet gene is in a region of the genome that has been implicated in asthma susceptibility.

    The mice will be especially useful for fingering the proteins that interact with T-bet to encourage the development of TH1 cells, says asthma specialist Jack Elias of Yale University School of Medicine. Such proteins might help scientists track down the still-mysterious cause of asthma. Although any treatments are years away, Glimcher says there may be ways to tweak the T-bet system in human lungs to discourage asthma attacks. Any such hints should help asthma patients breathe a little easier.

  8. STEM CELL RESEARCH

    Stem Cells May Shore Up Transplanted Hearts

    1. Caroline Seydel*
    1. Caroline Seydel is a freelance science writer in Los Angeles.

    Can a broken heart be mended? Perhaps, says a new report, which shows that after a heart transplant, cells migrate to the donated organ, possibly helping it recover. These migrants show signs of being stem cells, those multitalented cells that have the capacity to develop into a multitude of tissues.

    Some parts of the body, such as the skin, regenerate readily when damaged. But “we all thought that once you lose a chunk of heart, it's gone,” says cardiologist Roberto Bolli of the University of Louisville in Kentucky. One of the first indications that the heart can bounce back came in July 2001, when researchers reported that heart muscle cells can divide after a heart attack. Transplanted hearts are often similarly damaged: Many heart cells die during the hours the organ is out of the body.

    Help from afar?

    A cell containing a Y chromosome (arrow) has taken up residence in heart tissue from a female donor.

    CREDIT: F. QUAINI ET AL., NEJM 346, 8 (2002)

    Cardiovascular researchers Federico Quaini and Piero Anversa of New York Medical College in Valhalla and colleagues at the University of Udine, Italy, wanted to find out whether the transplant recipient's body pitches in to help heal the new organ. The team examined eight hearts transplanted from female donors into male patients. Up to 10% of cells in the transplanted hearts contained the male Y chromosome—a clear sign that cells from the recipient had taken up residence in the new heart, the group reports in the 3 January issue of The New England Journal of Medicine.

    Cardiologist Philip Binkley of Ohio State University, Columbus, calls the study an “ingenious and novel demonstration that the heart can recruit new cells that may be a benefit in remodeling the heart and possibly improving cardiac function.” Several types of cells appeared to regenerate, including cardiac muscle, smooth muscle, and endothelium, Anversa says. Cells containing the Y chromosome appeared “perfectly indistinguishable” from neighboring cells that lacked a Y.

    Suspecting that healing the heart might be a job for stem cells, Anversa's group then searched the heart tissue for three molecular markers characteristic of the versatile cells. They found cells bearing these markers both in transplanted and control hearts, suggesting that undamaged hearts harbor populations of such cells. The transplanted hearts contained even higher numbers of the cells, some of which came from the donor and others from the recipient, suggesting that the heart recruits stem cells from other parts of the body to aid in regeneration.

    But it is unclear whether these are bona fide heart-specific stem cells and, if so, exactly how they promote regeneration of heart tissue. Nor do the researchers know where the cells originate or how they migrate. Before they can claim to have found stem cells, Anversa says, the team must isolate the cells and demonstrate in vitro that they are self-replicating and capable of differentiating into many types of tissue—work that is now under way.

    Discovering how the heart might repair damaged tissue could have enormous, if distant, implications for treating heart disease. But as yet it's not even clear that the new cells help the transplanted organ, Binkley cautions. “Having recipient cells enter the [heart] could have certain detrimental effects,” Binkley says, such as clogging up blood vessels. Only additional studies, he says, can determine if such cells are more balm than bane.

  9. U.S. BUDGET

    Spending Triples on Terrorism R&D

    1. David Malakoff

    NASA may be best known for sending a man to the moon. Now it wants to show that it's no slouch when it comes to fighting terrorists. The U.S. space agency is one of 11 federal agencies (see pie chart) to share in a record $1.5 billion that Congress has showered on terrorism-related R&D for 2002 in response to the 11 September and anthrax attacks. The money—nearly triple last year's spending—will be used for everything from new laboratories for studying potential bioweapons to developing hacker-proof computer systems.

    Spreading the wealth.

    Eleven U.S. agencies will share a record $1.5 billion in terrorism-related research funds.

    The money is a windfall for researchers, who earlier last year were fighting to overturn proposed cuts in many terrorism-related R&D budgets. Even after the attacks, the White House paid scant notice to research in its $40 billion emergency recovery package that was approved by Congress. But the Senate took up the cause with a vengeance, labeling as urgent more than $800 million in new terrorism-related R&D projects. “We heard from everyone—university scientists, industry, [federal] researchers—about things they could do to reduce the threat if only they had some money,” says an aide to one Senate Democrat.

    The final package, mostly inserted late last month into the 2002 appropriations for the Department of Defense, contains $711 million for R&D. That freshet of funds, when combined with spending approved earlier, will push 2002 spending on terrorism-related R&D up 157% over the $579 million spent in 2001, according to an analysis by the American Association for the Advancement of Science (publisher of Science).

    Most of the new money will be used to expand existing programs aimed at preventing terrorist attacks. The military, for example, gets a 50% increase for its multifaceted terrorism research efforts, to $353 million. The Centers for Disease Control and Prevention, which has played a high-profile role in investigating the anthrax mail attacks, gets $1 billion overall for security-related expenses, including $130 million for studying anthrax and other potential bioweapons. That 256% increase “is hopefully just a down payment for research too long neglected,” says one science society lobbyist.

    The bioterrorism budget at the National Institutes of Health will soar by nearly 500%, to $289 million. The total includes $75 million for a new highly secure laboratory to work with dangerous pathogens. The Department of Agriculture is also slated to get new laboratories; of $113 million in new funds, $73 million is set aside for an animal biocontainment facility at the National Animal Disease Laboratory in Ames, Iowa, and improvements at the controversial Plum Island Animal Disease Center in New York (Science, 26 May 2000, p. 1320). The Department of Energy will devote $78 million of its $126 million in new funds to help prevent nuclear terrorism.

    The country's space agency gets $33 million for work on two fronts: information systems that terrorists can't penetrate, and imaging systems and other technologies to better detect enemies. And the Environmental Protection Agency gets $70 million to, among other things, develop better methods to clean up after any bioweapons are unleashed.

  10. STEM CELL RESEARCH

    Rat Brains Respond to Embryonic Stem Cells

    1. Gretchen Vogel

    In the heated ethical debates over embryonic stem (ES) cells, Parkinson's disease often figures large. Advocates for more research say ES cells offer the best hope for treating or even curing this devastating and deadly disease, which gradually robs patients of their ability to move. The advocates hope that scientists will someday be able to turn ES cells into dopamine-producing cells, replacing those that are lost in the disease. So far, however, evidence that ES cells can make this switch has been limited to experiments in lab culture, not animals.

    Pot of gold?

    Mouse embryonic stem cells injected into rat brains express the AHD2 protein marker (yellow) characteristic of cells lost in Parkinson's disease.

    CREDIT: LARS BJÖRKLUND/HARVARD

    Now, in a paper published online on 8 January by the Proceedings of the National Academy of Sciences, a team of neuroscientists reports that, indeed, mouse ES cells can become dopamine-producing neurons in the brains of rats. These experiments are the first to show that the specific type of neurons missing in Parkinson's disease can develop from an ES cell in an animal's brain and lead to partial recovery.

    But the work does not mean doctors will soon be injecting human ES cells into Parkinson's patients. “This shows that it can be done, but there are big obstacles to bring this to clinical fruition,” says Ole Isacson of Harvard Medical School in Boston, who led the group. Among the caveats: Several rats developed deadly tumors, and the animals' behavioral improvements were limited.

    Isacson's approach was surprisingly straightforward: He and his colleagues simply injected untreated ES cells into the rats' brains. In previous experiments, Isacson, postdoc Lars Björklund, and their colleagues had found that undifferentiated ES cells, when injected into animals, seemed eager to become neurons. However, the cells frequently grew out of control and formed teratomas, tumorous growths comprising a mix of cell types. Isacson and Björklund suspected that the differentiating ES cells were sending conflicting signals to each other that promoted the growth and formation of the teratomas. The team decided to test whether diluting the cells would lessen the chance that the cells would interact with each other, thereby encouraging development along the possible default pathway: making neurons. The team prepared mouse ES cells in a dilute solution and injected about 2000 cells each into the brains of 25 rats. The rats had previously had their dopamine-producing neurons damaged and showed a characteristic tendency to move in circles toward the damaged side of the brain.

    Six of the rats showed no evidence that the transplanted cells survived. Five died before behavioral tests were completed and proved to have teratoma-like tumors. But 14 of the rats had surviving mouse cells in their brains 4 months after surgery. All of the surviving grafts contained at least some dopamine-producing neurons. And many of those neurons expressed a protein marker called AHD2, a marker typical of the specific kinds of neurons lost in Parkinson's disease.

    “What this work shows is that you can easily get dopamine-producing neurons in the brain,” even from undifferentiated ES cells, says developmental neurobiologist Ron McKay of the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland.

    And the new neurons seem to have reduced Parkinson-like symptoms in the animals. The scientists observed a gradual decrease in the abnormal rotations; by 9 weeks following the surgery, the 14 rats with surviving ES cells had improved by an average of 40% over their pretransplant state. Rats that received sham surgeries showed no improvements. But Anders Björklund of the University of Lund in Sweden cautions that the functional effect is very small. He notes that other transplantation experiments using dopamine-producing neurons from fetal brain tissue (a technique now being tested in humans) regularly produce much more dramatic results.

    To Lorenz Studer of the Memorial Sloan-Kettering Cancer Center in New York City, “the results suggest that the fewer cells you put in, the bigger the influence of the environment becomes. If you dilute them sufficiently, you can get them to disregard this tendency to cause tumors.”

    Studer adds that the relative ease with which dopamine-producing cells developed from the injected ES cells suggests that there might be a way to prompt rare stem cells already in the brain to become dopamine-producing neurons, allowing doctors to avoid the issue of transplanting cells altogether.

  11. OPTICS

    Crystal Stops Light in Its Tracks

    1. Charles Seife

    Baby-boomer superheroes, take heart: Despite your aging knees, you can still run faster than light. For years, physicists have been slowing light down to a crawl, and even stopping it in its tracks, by shooting laser beams into cold gases known as Bose-Einstein condensates (BECs). Now, researchers have done the same thing in a much less exotic solid. The advance may one day lead to memory devices for computers that store information on beams of light.

    The work builds on experiments Lene Hau of Harvard University performed in the late 1990s. Hau and colleagues managed to slow light down to a poky 17 meters a second—below the top speed of a bicycle. The group used two lasers, known as the probe beam and the coupling beam, to poke a spectral “hole” in a BEC of sodium atoms: the beams made the condensate's electrons interfere with each other in ways that made it impossible for them to absorb photons of a certain frequency. As a result, the BEC became transparent within a narrow range of frequencies of yellow-orange light. The speed of a light pulse in a medium is related to how readily the medium absorbs light of different frequencies; sharp variations in absorption across a narrow range of frequencies dramatically slow a pulse of light. Thus, the tiny spectral hole caused an unprecedented slowing.
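
    In textbook terms, the slowing reflects the group velocity of a pulse (a standard optics relation, offered here as a gloss rather than anything spelled out in the team's report):

    $$ v_g = \frac{c}{\,n(\omega) + \omega\,\dfrac{dn}{d\omega}\,} $$

    where n(ω) is the refractive index at frequency ω. The Kramers-Kronig relations tie a sharp dip in absorption to a steep rise of n across the transparency window, so the derivative term becomes enormous and v_g plunges from hundreds of millions of meters per second to a bicycle's pace.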

    Squeeze play.

    Shrinking the range of frequencies a crystal can transmit slows light to a crawl.

    Unfortunately, BECs exist only within a few hundred nanokelvin of absolute zero. Solids can exist at warmer temperatures. But in solids, unlike BECs, the laser-induced transparency trick makes too wide a spectral hole to slow light appreciably, says physicist Phil Hemmer, now at Texas A&M University. To narrow the hole, Hemmer and colleagues used two beams from a dye laser to create a large spectral hole in an yttrium crystal. A third beam increased the absorption inside the hole to create a smaller “antihole.” Happily, the coupling and probe lasers also bleach a narrow anti-antihole of transparency inside the antihole, yielding a range of transparency narrow enough to slow down light to 45 meters per second.

    Furthermore, when they shut down the coupling field, the crystal brought the beam to a halt by absorbing and storing the light, and eventually released it when the coupling laser was turned back on—a trick Hau also had performed earlier with BECs (Science, 26 January 2001, p. 566). “You preserve phase and amplitude,” says team member Alexey Turukhin, a physicist at the Eatontown, New Jersey, branch of laser company JDS Uniphase. As a result, Turukhin says, the light can store information in ways that make it suitable for quantum computing. And although the yttrium crystal must be kept at a chilly 5 kelvin, it is still much easier to handle than a BEC, an important consideration for commercial devices that store light pulses. What's more, Hau notes, light pulses shrink as they slow down, a property that might give scientists an efficient means of compressing information stored on light pulses.

  12. GEOLOGY

    Subtleties of Sand Reveal How Mountains Crumble

    1. Liese Greensfelder*
    1. Liese Greensfelder is a writer in Nevada County, California.

    Quartz grains altered by cosmic rays are forcing geologists to reevaluate which forces shape landscapes and how fast they operate

    Crowding the windowsill in Jim Kirchner's office is a heap of cotton sacks bulging with tiny pebbles, dirt, and sand scooped from a clear mountain stream in Idaho. Hidden in each of these sediment samples are a few rare atoms called cosmogenic nuclides. Kirchner intends to find and count these atoms, and when he does, he'll be able to figure out how fast the rugged landscape of central Idaho has been wearing down over the past 10,000 years.

    Kirchner, an earth scientist at the University of California (UC), Berkeley, is one of a couple dozen researchers worldwide who are using cosmogenic nuclides to study long-term erosion rates, a tool that has enabled them to paint a portrait of landscape formation with previously unparalleled resolution. Barely out of its infancy, this technology has already spawned a slew of applications, from examining the sustainability of agricultural practices to decoding signals of global climate change.

    Cosmic cues.

    Sediment in Idaho rivers (top) revealed history of surrounding watersheds (bottom), thanks to changes wrought by cosmic rays.

    CREDITS: (TOP TO BOTTOM) JIM KIRCHNER; ILLUSTRATION: C. SLAYDEN

    “The work is revolutionizing our quantitative understanding of the Earth's surface,” says Paul Bierman, a geomorphologist at the University of Vermont in Burlington. A pioneer in the emerging field, Bierman has helped develop the technique in areas ranging from the deserts of Namibia to the mountains of Tennessee. “This field is fascinating because it's still in its youth and it can still be used to tackle problems that people have never made a measurement on,” he says.

    The budding research is based on cosmogenic nuclide dating, a procedure that scientists have used for about 15 years to determine how long a chunk of matter such as a boulder or an outcrop has rested on or near Earth's surface. Such “exposure ages” have helped them pin dates on items as diverse as antarctic meteorites, ancient earthquakes in Montana, and glacial advances in Tibet.

    In theory, cosmogenic dating is simple. Cosmic rays—which are constantly bombarding the planet—occasionally collide with nuclei in the atoms of certain minerals, producing rare isotopes, or nuclides. Although the rays can penetrate a meter or two through rock or soil, most nuclide production occurs within a half-meter of the surface, decreasing exponentially with depth. The longer a rock is exposed, the more nuclides it accumulates.

    The theoretical leap from exposure ages to erosion rates came early. In 1985, two scientists at UC San Diego, Devendra Lal and James Arnold, proposed that cosmogenic nuclide concentrations in sediments should correspond to the erosion rate of the material that shed them. The team reasoned that as rock and soil erode from Earth's surface, deeper lying materials that were once shielded from cosmic rays become increasingly exposed. The slower the erosion rate, the greater the accumulation of nuclides.
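
    In its simplest steady-state form (a textbook sketch of the Lal-Arnold argument, with symbols chosen here for illustration), the idea fits in two lines. Nuclide production falls off exponentially below the surface, P(z) = P_0 e^{-ρz/Λ}, where ρ is rock density and Λ, roughly 160 grams per square centimeter, is the cosmic-ray attenuation length. If the surface erodes steadily at rate ε, quartz rides upward through this production zone and reaches the surface carrying a concentration

    $$ N = \frac{P_0}{\lambda + \rho\varepsilon/\Lambda} $$

    where λ is the nuclide's radioactive decay constant. When erosion outpaces decay, this reduces to N ≈ P_0Λ/(ρε): the measured concentration is inversely proportional to the erosion rate, which is why slowly eroding landscapes accumulate more nuclides.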

    In 1995 and 1996, three studies reported by independent teams launched the cosmogenic erosion rate bonanza. The teams tested the notion that they could estimate the average long-term erosion rate of an entire catchment or watershed from concentrations of cosmogenic nuclides in quartz-rich sediment plucked out of the watercourse draining the catchment.

    Quartz is a near-perfect material for erosion rate analysis. A hard and stable crystal made of silica (SiO2), it's one of the most common minerals in the world and is abundant in rock and sand. Cosmic rays that strike quartz can transform its oxygen atoms into 10Be and silicon atoms into 26Al, unstable isotopes (radionuclides) of beryllium and aluminum. The new sampling technique regards hillsides as conveyor belts that funnel quartz and other eroded materials into streams and the waiting hands of geologists. “By picking up a handful of sand, you've collected a half-million grains that come from all over a catchment,” says Douglas Burbank, a tectonic geomorphologist at UC Santa Barbara. “It's a very potent integrator of the information and obviously far more efficient than sampling hundreds of outcrops.”

    Sediments usually flush quickly through a watershed, so the sprinkling of new cosmogenic nuclides this eroded material acquires while tumbling downhill and downstream is generally insignificant when compared with the nuclides accumulated during the much longer time span the material lies on or near Earth's surface. It's this time span that the concentration of nuclides in a sediment sample measures, and it varies from sample to sample, ranging from about 1000 to 100,000 years.

    Scientists who study the evolution of the landscape are bubbling with enthusiasm for their new tool. A key reason is that this 100,000-year range spans the era during which the forces of erosion shaped much of the modern landscape. “We've known for years how to measure how old rocks are,” says UC Berkeley's Kirchner. “What cosmogenics now allows us to do is to tell how old the hills are, how old the valleys are, how fast the Earth's surface is evolving. This is a whole class of questions we can answer much better now than we ever could before.”

    Before long Kirchner will take the sediment samples piled in his window downstairs to his laboratory, where they will be separated, crushed, boiled in phosphoric acid and then in hydrofluoric acid, and finally reduced to small, powdery flecks. Kirchner will drive these precious powders 65 kilometers south to Lawrence Livermore National Laboratory, home to one of only two accelerator mass spectrometers in the country powerful enough to extract the information Kirchner needs. The other spectrometer resides at the PRIME Lab at Purdue University in West Lafayette, Indiana.

    What happens next is testimony to extraordinary recent advances in spectrometers and the skill of the scientists who operate them. To tally 10Be atoms in each of the Idaho samples offered by Kirchner, the Livermore spectrometer hurls them down its long tunnels at speeds of up to 80 million kilometers per hour. If they had to, scientists at Livermore say, they could tune this machine to pick out as few as five atoms of 26Al or 10Be in backgrounds of 10,000 trillion (10¹⁶) atoms, although typical concentrations are 1 to 10 parts per trillion.

    Gimme shelter.

    Sediment swept under the cosmic-ray umbrella of Mammoth Cave rewrote a key sequence of Ice Age events.

    CREDITS: (TOP TO BOTTOM) DAVID MUENCH/CORBIS; ILLUSTRATION: C. SLAYDEN

    Working with such infinitesimal numbers can be tricky, however. Burbank warns that researchers must carefully choose study areas to avoid garbling the message of nuclide concentrations. Data analysis assumes that quartz is evenly distributed in the landscape, yet this is not always the case. Streambed samples ignore boulders and gravel, important products of erosion where landslides have occurred. In some catchments, a complex history of sediment burial, exposure, and reburial simply defies interpretation. Finally, Burbank cautions, cosmogenic analysis is so time-consuming and expensive that researchers may not be able to afford enough data to draw rock-solid conclusions. (It costs about $1000 to transform a sediment sample into a point on a graph, Kirchner estimates.) “You don't have the luxury of trying to test alternative interpretations rigorously,” Burbank says. “That's a tough restriction.”

    With those caveats in mind, however, researchers now have a good tool for determining how fast erosion happens. The early results contain some surprises. Kirchner was startled when the nuclide concentrations in the sediments he drew out of streams in 37 different catchments in Idaho's mountains revealed erosion rates over the past 5000 to 27,000 years that averaged a whopping 17 times higher than modern-day rates, a finding he reported in the July 2001 issue of Geology. After ruling out climate change and other factors, Kirchner concluded that the huge discrepancy must be due to catastrophic erosion events (triggered, for example, by devastating wildfires followed by massive flooding) so rare that decades of regular observations are unlikely to spot them. “What you see when you look at erosion in those mountains from day to day bears no resemblance to the long-term average,” Kirchner says. One lesson to be drawn from this study, Kirchner suggests, is that in young, dynamic mountain ranges, engineers may be greatly overestimating the time it will take reservoirs to fill with debris should one of these catastrophic events occur during the reservoir's lifetime.

    Cosmogenics can also reveal notes from underground, as in Darryl Granger's “burial dating” of gravel and sand that washed into caves thousands of years ago. Burial dating assumes that when sediments enter a cave that's deep enough to shield them from cosmic rays, they quit accumulating cosmogenic nuclides. By analyzing the slow decay of 26Al and 10Be in buried sediments from Mammoth Cave in Kentucky, Granger—a geomorphologist at Purdue University—has traced a 3.5-million-year history of the cave and the Green River. Earlier studies had unraveled the sequence of Mammoth Cave's formation, but not the timing. Granger's work tied the cave's history to the geological history of the Green River and the regional climate record. One of his several surprising discoveries was that the finger of the Laurentide ice sheet that dipped into Kentucky to form the Ohio River arrived 1.5 million years ago, 700,000 years earlier than previously thought. Granger's study was reported in the July 2001 Geological Society of America Bulletin.
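
    The arithmetic behind burial dating is ordinary radioactive decay (a simplified sketch with rounded half-lives, not Granger's exact calculation). At the surface, cosmic rays hold the 26Al/10Be ratio near a fixed production value R_0 of about 6. Once sediment is shielded inside a cave, 26Al (half-life about 0.7 million years) decays faster than 10Be (about 1.4 million years), so the ratio falls exponentially and the burial time can be read off from the measured ratio R:

    $$ R(t) = R_0\, e^{-(\lambda_{26}-\lambda_{10})t} \quad\Longrightarrow\quad t = \frac{\ln(R_0/R)}{\lambda_{26}-\lambda_{10}} $$

    With those half-lives the ratio itself halves about every 1.4 million years, which is what lets cave sediments record burial ages of a few million years, the span of Granger's Mammoth Cave chronology.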

    A number of researchers are using cosmogenics to tackle questions with more immediate environmental and economic consequences. In Sri Lanka, where 200 years of intensive farming have stripped much of the native rainforest from the island's highlands, a group led by Friedhelm von Blanckenburg—a geochemist then at the University of Bern, Switzerland—used the technique to measure long-term erosion rates in remnant forest patches. When the team compared these natural long-term rates with modern-day rates generated by agricultural practices, they found that cultivation had increased erosion a staggering 20- to 100-fold. The group presented its results last month at a meeting of the American Geophysical Union.*

    Meanwhile, Arjun Heimsath, a geomorphologist at Dartmouth College in Hanover, New Hampshire, is looking at the other side of the coin. By analyzing cosmogenic nuclides in sediments and bedrock, he is studying how quickly new soil is produced from the bedrock that underlies soil on hillsides. Heimsath hopes to use his work to create models that will predict how climate change and management practices affect soil production and erosion.

    By clocking and measuring long-term erosion rates, scientists can now begin to understand the forces that drive them. Some studies have produced unexpected results. Only a decade ago, for example, many geologists assumed that as the climate grows warmer and wetter, erosion and chemical weathering (the chemical breakdown of soil and rocks) accelerate. But this notion has itself been slowly eroding. Clifford Riebe, a geomorphologist at UC Berkeley, has used cosmogenics to produce some of the first quantitative evidence that challenges the old assumption. When he compared weathering rates with long-term erosion rates at seven sites that span wide climate ranges in California's Sierra Nevada mountains, Riebe found that erosion and weathering did indeed go hand in hand but that climate had little effect on either. “Given that chemical weathering gets its name from weather,” Riebe says, his results “came as a surprise.”

    Instead, Riebe discovered, erosion was swiftest on steep slopes near geologic faults or river canyons—evidence that tectonic activity eclipsed climate in driving erosion and weathering. Extrapolated to a global scale, that conclusion bolsters a 13-year-old theory that the uplift and erosion of the Himalayas—and the ensuing consumption of atmospheric CO2 by chemical weathering reactions—triggered a global cooling that began about 40 million years ago. Riebe is now roaming from the rainforests of New Zealand to the deserts of Mexico to see whether tectonics dominates weathering rates even in those extreme climates.

    As cosmogenic studies of erosion move out of the hands of specialists and into more widespread usage, new applications are blossoming. Researchers like Vermont's Bierman readily rattle off long lists of their plans and predictions for future research. “Overall, what keeps this exciting for me is [applying the techniques to] problems that we otherwise haven't been able to solve,” Bierman says. “It's not an incremental learning experience. It's a major learning experience.”

    • *AGU 2001 Fall Meeting, San Francisco, 10 to 14 December.

  13. BIOENGINEERING

    Plant Scientists See Big Potential in Tiny Plastids

    1. Josh Gewolb*
    • *Josh Gewolb is a writer in New York City.

    Tinkering with plant cells' second genome could boost photosynthesis or turn plants into drug factories

    When it comes to genetic engineering, the genes inside the nucleus get all the attention. But plants have an unassuming second genome inside tiny organelles called plastids. And although this small, circular genome carries far fewer genes than its nuclear counterpart, researchers say its potential for genetic engineering far outstrips its size.

    The plastid genome arose some 1.5 billion years ago, when the ancient ancestors of modern plants are thought to have engulfed photosynthetic bacteria and put them to work manufacturing food. Over time, many of the original plastid genes slipped into the nucleus, but a small genome with about 100 genes remains. Plants now contain undifferentiated organelles that can diversify into a number of specialized plastids, each of which carries this secondary genome. The most famous member of the plastid family is the chloroplast, the photosynthesis factory. Others include the amyloplasts, which store starch, and the oil-producing elaioplasts.

    Neon plastids. When inserted into the plastid genome, a gene for a fluorescent marker protein (GFP) signals a successful transformation. CREDIT: P. MALIGA/RUTGERS UNIVERSITY

    Because plastids present an easy way to produce proteins in high concentrations and offer unique access to the photosynthetic machinery, some plant scientists believe they offer some of the best opportunities to make transgenic crops that grow more efficiently, for instance, or that manufacture medicines. “You get high yields” of proteins produced by plastids, says emeritus Harvard molecular biologist Lawrence Bogorad. And when it comes to boosting photosynthesis, “you can probably do things in the chloroplast compartment that you can't do in the nuclear compartment,” Bogorad says. But transforming plastids is technically tricky, and the field, although growing, remains small.

    The promise of plastids

    For years, researchers have dreamed of tinkering with the genes in plants to turn them into living, photosynthesizing drug factories. If plants could be engineered to pump out lots of therapeutic proteins, these could be isolated and made into medicines. But, although creating transgenic plants by altering their nuclear DNA has become routine, it remains extremely difficult to get these plants to produce the desired protein—say, antibodies against herpesviruses or enzymes for diagnostic kits—in large quantities. In most such plants, the new protein accounts for a paltry 1% of the plant's total protein output, although levels as high as 25% have been reported in a few exceptional cases.

    Transgenic plants made with altered plastids are much more productive than nuclear-engineered plants. Last year, geneticist Henry Daniell of the University of Central Florida in Orlando inserted a gene cluster for an insecticidal Bacillus thuringiensis toxin into the chloroplasts of tobacco plants; the chloroplasts churned out vast amounts of the crystallized protein—45% of the cell's total protein output. Levels routinely reach 5% to 15% in the latest studies, says geneticist Pal Maliga of Rutgers University, New Brunswick, New Jersey.

    Engineering the plastid genome has additional advantages over nuclear transformation. For example, the risk that foreign genes introduced into plastids will spread to other plants is much lower than the risk that nuclear genes will make such a leap. This is because plastid DNA in most crop species is transmitted from generation to generation only through the ovules, the plant “egg,” not through pollen, the plant “sperm”—just as animals' mitochondrial DNA is passed down only through the egg. Plants produce thousands of tiny pollen grains that can be spread uncontrollably by wind or insects over wide distances, but seeds developing from ovules stay with the plant.

    What's more, the rules of molecular biology are different in the plastid than in the nucleus. In the nuclear genome, each gene is turned on and off by its own control sequence. That makes it difficult to engineer complex traits controlled by many genes. But in the plastid, multiple genes are controlled by the same genetic switch, as is the case in bacteria. “It's like a bacterial fermenter in a plant cell,” says research director Peter Heifetz of the Torrey Mesa Research Institute in San Diego. With plastids, he sums up, “you solve a lot of problems in one shot.”

    Technical difficulties

    But plastid genomes have often defied the best efforts to modify them. Since Maliga modified the plastid genome of tobacco more than a decade ago, scientists have been able to transform plastid genomes in just a handful of plant species. The technical knowledge required to rework plastids is spreading, but slowly.

    Shooting up. After researchers weed out nontransformed tomato cells, they sprout the cells with souped-up plastids. CREDIT: R. BOCK/UNIVERSITY OF FREIBURG

    Modifying the plastid genome is tough for the same reason that it's promising: Tens of thousands of copies of the genome may be present in any given cell. A single plastid can have hundreds of copies of the genome, and a plant cell can have hundreds of plastids. For successful transformation, the new gene must be present in each copy of the plastid genome within each cell.

    To achieve that, scientists first insert their gene of choice into a single plastid and then allow the cell to divide many times in culture. Then they apply a delicate balance of chemicals that allows cells with more copies of the gene to prosper. After months of selection—if all goes well—the culture will contain only transformed cells.

    At this point, scientists are left with a plate of undifferentiated cells that they have to turn back into a plant. Doused with the proper cocktail of plant hormones, tobacco, potato, tomato, and other plants in the nightshade family are easy to regenerate—and plastid engineering has been relatively successful in them. But many of these techniques are species-specific, and it's been difficult to apply them to other plants.

    Healthful tobacco?

    So far, tobacco has yielded the most plastid engineering successes. In 2000, plant geneticist Jeffrey Staub and his colleagues at Monsanto in St. Louis genetically altered tobacco chloroplasts so that they produced a correctly folded human protein called somatotropin, which is used to treat dwarfism in children. Maliga calls the findings a “milestone,” because they convinced skeptics that the bacteriumlike genetic machinery in plastids was capable of correctly folding mammalian proteins. The study also showed that plastids don't tack extra chemical modifications onto proteins after synthesizing them, a frequent drawback of producing human proteins via a plant's nucleus. Monsanto does not plan to commercialize the plants, which produce the protein at levels 300-fold higher than do their nuclear transgenic counterparts. The researchers chose to work with somatotropin, which has a well-studied structure, only as proof of principle; however, the company is now rumored to be working on expressing other human proteins in tobacco. “We may actually find something very useful to do with tobacco,” says retired Duke geneticist Nicholas Gillham.

    Tobacco isn't an ideal host, however. For one, it grows in a restricted geographical area. But, more important, it would be difficult to extract proteins of interest from the plants, which produce other troublesome compounds such as nicotine.

    Making a fruit or tuber with genetically transformed plastids would get around many of tobacco's limitations. The edible result would be big enough to contain large quantities of the compounds of interest. In 1999, Staub and his Monsanto colleagues produced potato plants with genetically modified plastids. But the tubers expressed the foreign genes at a concentration 100 times lower than that in the leaves, equivalent to levels achievable by nuclear transformation.

    Engineered tomatoes have met with more success. Ralph Bock and his colleagues at the University of Freiburg, Germany, spent more than 2 years developing transformation and regeneration conditions for a tomato. In the September 2001 issue of Nature Biotechnology they report proof of principle that the tomato fruit can be modified. The 1% expression levels the researchers achieved were fairly low, but on the bright side, the fruits produced fully half as much protein as the plants' green leaves. This suggests that the same tricks that help increase expression levels in tobacco leaves might lead to fruits that express engineered plastid proteins in large quantities. “These are first-generation expression levels,” says Heifetz, who expects that the researchers will be able to improve the fruits' output.

    Future harvests

    Photosynthesis is very inefficient, turning less than 1% of the incoming solar radiation into food. If it could be slightly improved, the face of agriculture would change dramatically, as plants could grow bigger with less sunlight.

    Until the invention of plastid transformation, scientists could not tinker with RuBisCO, the key carbon-fixing enzyme of photosynthesis, because half of its subunits are encoded in the chloroplast genome. Dozens of research groups have since tried to alter the molecule to improve the efficiency of photosynthesis, but none so far have succeeded (Science, 15 January 1999, p. 314).

    Now, however, plant physiologists Spencer Whitney and T. John Andrews at Australian National University in Canberra have taken a key step toward this goal, creating the first viable plants with an altered RuBisCO. They report in the 4 December 2001 issue of the Proceedings of the National Academy of Sciences that photosynthetic efficiency drops predictably when the RuBisCO of tobacco plants is replaced with the less efficient RuBisCO from the photosynthetic bacterium Rhodospirillum rubrum. Even though the plants are less efficient, not more, the advance is exciting, according to Maliga, because it is “the first time that [researchers have] actually changed the properties of the photosynthetic machinery in a predicted fashion.”

    Andrews ties the success of the study to the spread of the technical skills needed to conduct plastid transformation. “I think it hasn't been done before because the technology for plastid transformation is not that widely dispersed,” says Andrews. As that knowledge becomes more widespread, the humble plastid may acquire mightier powers.

  14. AMERICAN GEOPHYSICAL UNION MEETING

    Of Ocean Weather and Volcanoes

    1. Richard A. Kerr

    SAN FRANCISCO, CALIFORNIA—The 8500 attendees of last month's meeting of the American Geophysical Union had some interesting poster presentations to choose from. Particular attractions for the perambulatory set included a possible volcano aborning, old sea-floor volcanoes, and predictions of next month's ocean storms.

    Next Month's Ocean Weather

    Drilling for oil in the deep Gulf of Mexico? Fishing on the Grand Banks? Looking for unfriendly submarines playing cat and mouse in the Gulf Stream? Then the U.S. Navy has just what you need. In a first, it is routinely making computer forecasts of the swirlings and churnings of the world ocean in enough detail to realistically render ocean “weather.” Similar numerical forecasting of atmospheric weather went operational in the late 1950s, but oceanographers had to await far faster computers to simulate the smaller scale “storms” within the ocean. Now, the position of Japan's mighty Kuroshio current or next month's ring of current peeling off into the Gulf of Mexico is just a few clicks away (www7320.nrlssc.navy.mil/global_nlom/).

    The ocean forecasting milestone comes courtesy of the Naval Research Laboratory's (NRL's) Layered Ocean Model (NLOM), as described in a meeting poster by Ole Smedstad of Planning Systems Inc. and NRL colleagues, all located at the Stennis Space Center near Bay St. Louis, Mississippi. NLOM produces a 30-day forecast of ocean behavior with enough detail to portray 50- to 100-kilometer-wide eddies, the oceanic equivalent of atmospheric storms that typically span thousands of kilometers.

    Researchers first barely resolved ocean eddies in global models in the early 1990s by dividing the ocean into blocks 0.5 degree on a side and calculating the average properties of each block of water (Science, 2 April 1993, p. 32). NLOM now divides the ocean into one-sixteenth-degree blocks, yielding a resolution of 6 to 7 kilometers at midlatitudes, fine enough to clearly render eddies.
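
    As a rough check on those numbers, one degree of latitude spans about 111 kilometers, so the claimed resolution follows from simple trigonometry; here is a minimal Python sketch (illustrative geometry, not NLOM's actual grid):

        import math

        KM_PER_DEG_LAT = 111.0  # approximate length of one degree of latitude

        def cell_size_km(lat_deg, fraction=1.0 / 16.0):
            """North-south and east-west extent of a grid cell at a given latitude."""
            ns = KM_PER_DEG_LAT * fraction                # same at all latitudes
            ew = ns * math.cos(math.radians(lat_deg))     # shrinks toward the poles
            return ns, ew

        for lat in (0, 30, 45, 60):
            ns, ew = cell_size_km(lat)
            print(f"{lat:2d} deg: {ns:.1f} km N-S, {ew:.1f} km E-W")

    At midlatitudes the north-south spacing comes out near 7 kilometers, squarely in the 6- to 7-kilometer range quoted above.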

    Going global. Computers now allow for 30-day forecasts of ocean behavior (here, ocean height). CREDIT: U.S. NAVAL RESEARCH LABORATORY, STENNIS SPACE CENTER

    The next trick is to make a forecast quickly. NLOM does that using a computationally efficient scheme for creating a snapshot of water movements globally and projecting those motions into the future. It also takes some shortcuts, such as not dealing with shallow waters over the continental shelf. And it relies on an IBM WinterHawk 2, a massively parallel computer that achieves speeds of 35 gigaflops using 216 processors. A decade ago, oceanographers had to settle for eight processors working in parallel to achieve speeds of little more than 1 gigaflops.

    Like atmospheric weather forecasting models, NLOM needs to be told what's going on at the moment, plus how wind and heat are driving ocean circulation. In place of weather stations, the model turns to satellites, taking in data on sea-surface temperature and the shape of the sea surface. (The sea-surface shape reflects the pattern of ocean currents the way highs and lows of atmospheric pressure reflect the way the wind must blow.) The model also takes in data on how the wind is blowing water at the ocean surface and how heat is flowing in and out of the ocean.

    Each day, the model creates a snapshot of the world ocean and then a 4-day forecast. On Wednesdays, it produces a 30-day forecast. It has been running since October 2000 and in an operational mode with public distribution of results since last October. Although the forecasts are useful out to at least 30 days in most areas, says Smedstad, in more dynamic regions, such as the area of the Gulf Stream, they are useful for only about half a month.

    That's long enough to pique the interest of a variety of forecast users. An oil company would like to know if a powerful eddy is about to sweep by its deep-sea drill rig. Fishing companies want to know where eddies of particularly warm or cold water might be harboring fish. Even researchers keeping tabs on whales in the northwest Pacific check forecasts. “What they have achieved—running a quite high-resolution model on a routine basis—is an important contribution,” says ocean modeler Allan Robinson of Harvard University, “but there's a great deal of work to be done yet.” Weather forecasters are still 40 years ahead of oceanographers.

    Oregon's Bulging Unabated

    Last spring, volcanologists studying satellite radar measurements were startled to find a 15-kilometer-wide swelling of the ground just west of central Oregon's Three Sisters volcanoes that threatened an eruption (Science, 18 May 2001, p. 1281). The bulge continues to grow, according to an instrument volcanologists installed on it to track the doming. But they are relaxing a bit after seeing signs that similar events may have occurred before without triggering eruptions. Although magma may yet break through to the surface, giving volcanologists a start-to-finish record of an eruption, they aren't holding their breath.

    The continued bulging has been confirmed by the Global Positioning System (GPS) receiver installed last May near the center of the uplift. Geodesist Michael Lisowski of the U.S. Geological Survey's (USGS's) Cascades Volcano Observatory in Vancouver, Washington, reported at the meeting that GPS has recorded a rise of about 30 millimeters per year, about the rate indicated by interferometric synthetic aperture radar (InSAR) up to last year.

    Taking the total uplift of 150 millimeters since bulging began in late 1997, Lisowski and his colleagues calculate that about 21 million cubic meters of magma has risen into a chamber located 6 to 7 kilometers beneath the bulging surface. That volume is equivalent to a sphere 350 meters in diameter, or one-tenth the volume of magma injected into the heart of Mount St. Helens before its cataclysmic eruption.
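
    The sphere equivalence is simple geometry. A volume V of about 21 million cubic meters corresponds to a diameter of

        $$ d = 2\left(\frac{3V}{4\pi}\right)^{1/3} = 2\left(\frac{3 \times 2.1 \times 10^{7}\ \mathrm{m}^3}{4\pi}\right)^{1/3} \approx 340\ \mathrm{m}, $$

    consistent with the quoted 350 meters.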

    Although some of the hundreds of small volcanic vents in the area have erupted as recently as 1200 years ago, volcanologists aren't betting on any fireworks soon. Experience at Northern California's Long Valley, the scar of an ancient caldera-forming eruption, provides some reassurance. Bulging centered there has totaled more than 750 millimeters since 1979, often accompanied by earthquakes as large as magnitude 6 or swarms of smaller quakes indicative of pressurized fluids breaking through rock a few kilometers down. Yet there's been no eruption, and the valley is quiet again. And chemical analyses of springs on the Oregon bulge suggest that there have been earlier magma injections. Warm springs there contain elevated sulfate, chloride, and helium-3, confirming that water has percolated down near deep-seated magma and returned to the surface. Moreover, analyses made in the 1980s in the course of exploration for geothermal energy also show elevated sulfate and chloride in the area, suggesting that one or more pulses of magma injection preceded the current episode without triggering an eruption.

    “We wouldn't be surprised if this died out,” says geodesist Charles Wicks of USGS in Menlo Park, California. But then again, no one's turning off the GPS yet.

    Plate Tectonic Benchmarks Adrift

    The founders of plate tectonics considered the fount of molten rock that feeds Hawaii's volcanoes to be immobile, a fixed marker on a planet where everything but volcanic hot spots like Hawaii shuffles about endlessly. In the plate tectonics canon, great plumes of hot rock rising from deep in the mantle feed the 40 or so hot spots such as Hawaii, Yellowstone, Iceland, Pitcairn, and the Galápagos. But doubts about the fixity of hot spots eventually arose. Now, deep-ocean drilling into the string of volcanic seamounts trailing away from Hawaii confirms that this hot spot, at least, has moved, at times as fast as some drifting continents. That suggests that reconstructions of long-past arrangements of plates around the Pacific require correction—including those supporting far-traveled bits and pieces of continent being plastered onto North America. It also implies that if plumes do feed hot spots, they are being buffeted by the “wind” of a vigorously churning mantle.

    Hawaii marks the hot spot. Both plate motion and a moving hot spot shaped the string of volcanoes trailing Hawaii. SOURCE: J. TARDUNO AND R. COTTRELL/UNIVERSITY OF ROCHESTER

    The job of testing hot-spot fixity has fallen largely to paleomagnetists. In principle, they can decipher where on Earth—or at least at what latitude—a volcanic rock formed: They measure the orientation of Earth's magnetic field frozen into a rock when it solidified from lava. Because Earth's field is horizontal at the equator, vertical at the magnetic pole, and tilted in proportion to its latitude in between, determination of the inclination of a rock's locked-in field can yield its latitude at the time it was formed. That means a core of rock drilled from one of the now-submerged seamounts—built one at a time in a string across the Pacific plate as it moved over the Hawaiian hot spot—should tell a paleomagnetist whether that volcano formed over the present location of the hot spot, 19°N.
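
    The conversion rests on the geocentric axial dipole relation, the standard formula behind the description above, linking inclination I to latitude λ:

        $$ \tan I = 2\tan\lambda \quad\Longrightarrow\quad \lambda = \arctan\!\left(\tfrac{1}{2}\tan I\right). $$

    At the hot spot's present latitude of 19°N, for instance, freshly cooled lava should lock in an inclination near 35°; seamount rock recording a markedly steeper inclination must have formed farther north.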

    Researchers did just that in the 1970s and 1990s at two seamounts in the northern North Pacific—Suiko and Detroit—and found that they had formed more than 1000 kilometers north of the present location of the Hawaiian hot spot. But the limited data didn't convince most researchers that the hot spot had moved.

    Paleomagnetist John Tarduno of the University of Rochester, New York, wanted to convince his colleagues that hot spots can move, so he arranged to do the job right. He and colleagues persuaded the Ocean Drilling Program to send its drill ship JOIDES Resolution to the Emperor chain of seamounts northwest of Hawaii for a full 2 months of drilling. Last summer, he, co-chief scientist Robert Duncan of Oregon State University in Corvallis, and the shipboard scientific party of Leg 197 drilled four seamounts, penetrating a total of 1200 meters of volcanic rock beneath the bottom sediments. That was enough to average out short-term variations in the orientation of the magnetic field, something earlier, less ambitious drilling had not done convincingly. With these higher-quality data from more sites, preliminary shipboard analyses suggest that the Hawaiian hot spot moved southward at rates as high as 30 to 50 millimeters per year—faster than North America is moving—between 81 million and 43 million years ago.

    “It's a very nice result,” says Tarduno, who presented his findings on his poster. “It's pretty hard to argue against hot-spot mobility now.” Tectonophysicist Seth Stein of Northwestern University in Evanston, Illinois, agrees. “His results raise a very major question about the long-term fixity of hot spots,” he says. “You're talking about hot-spot motion at the speed of plates. A lot of questions about mantle dynamics and plate motions will need rethinking.”

    Geophysicists have already been rethinking plumes and why their hot spots might move. Richard O'Connell of Harvard University and Bernhard Steinberger of the University of Colorado, Boulder, have modeled how the slow churning of the mantle might drive hot-spot motion. When they put Pacific plumes into their simulation of mantle motions, most plumes swayed toward the south the way a rising smoke plume would bend in the wind. That would move the upper end of the plume and thus the hot spot about 10 millimeters per year, slower than Tarduno has it but clearly moving. If the Hawaiian plume is blowing in the wind, the sharp bend in the Hawaiian-Emperor chain that had been attributed to a postulated change in the direction of the Pacific plate's motion—something no one has been able to find—may mark an abrupt shift in the mantle wind.

    Moving hot spots would also require rethinking how plates moved in the past. Paleomagnetist Robert Butler of the University of Arizona in Tucson notes that if the Hawaiian hot spot did move rapidly southward, then the long-vanished Kula ocean plate would not have moved northward along western North America as fast as calculated under the assumption of stationary hot spots. It was the Kula plate that supposedly carried chunks of continent from the latitude of Baja California and plastered them where British Columbia is today (Science, 12 September 1997, p. 1608). If the Kula plate didn't move as fast as assumed, says Butler, the Baja-British Columbia scenario won't work because the bits of continent couldn't be moved as fast as required.

    Hot-spot motion could also allow paleomagnetists to invoke little or no “true polar wander” to explain certain observations. True polar wander is theoretically possible, but researchers have long debated whether it has ever happened (Science, 10 April 1987, p. 147). One contingent sees the paleomagnetic data requiring the entire solid Earth to tumble like a rolling ball beneath the pole, making the pole move over millions of years. Others see hot-spot motion contaminating the paleomagnetic data. Perhaps seamount rock can settle that one too.

  15. WOMEN IN SCIENCE

    China Debates Big Drop in Women Physics Majors

    1. Yang Jianxiang*
    • *Yang Jianxiang writes for China Features in Beijing.

    Women made up more than one-third of the physics majors at top Chinese universities in the 1970s. Now their numbers are far below those in the West. What happened?

    BEIJING—During much of the 1970s, more than one in three physics students at two of China's top universities was a woman. Today, that proportion has plummeted to fewer than one in 10. This shift toward male domination of the field—the opposite of the trend in the United States—is prompting concern among many academics. But to others, it simply reflects a return to a more normal gender balance in physics in the generation after China began to open up to the West.

    Shifting gears. Chinese physicist Wu Ling'an says the decline in the number of women students (top) must be reversed. PHOTO CREDIT: JAMES ZENG-HUANG/CHINA FEATURES; GRAPH SOURCE: BEIJING UNIVERSITY, NANJING UNIVERSITY

    The trend is striking. For several years in the 1970s, women represented an average of 42% of the undergraduate physics majors at Beijing University* and 37% at Nanjing University. By the 1990s, however, women's share had fallen to a mere 9% and 8%, respectively. In contrast, women's share of U.S. physics majors stood at only 9% in 1978 but has climbed steadily, reaching a record high of 21% in 1999.

    “It's a backward movement that must be checked,” says Wu Ling'an, a senior physicist with the Chinese Academy of Sciences (CAS), about the Chinese numbers. Women are needed for their skills as well as for the different perspectives that they may bring to a problem, says Wu, who is helping plan an international conference on women in physics next March in Paris.

    But other Chinese physicists say that the earlier numbers were greatly inflated. The government needed to produce large numbers of technically trained workers, and it had the means to do so by dictating what students would be allowed to study. “Many students who scored the highest marks in entrance examinations were assigned to study physics even if they did not apply for it,” recalls one Beijing University physics graduate who had originally planned to major in mathematics. Li Lin, a CAS academician who trained in the United Kingdom after World War II, believes that these factors merged with another force driving women into physics: the desire to assert their equality in China's new society by breaking into fields traditionally closed to them.

    But getting a degree didn't mean that women were given the same job opportunities as their male counterparts. Instead, traditional assumptions about gender roles meant that women often were assigned to low-level positions that wasted their talents, says Li Fanghua, one of the two women academicians at the Physics Institute of CAS. The late Xie Xide, a pioneering woman physicist and former president of Fudan University, pointed out this inequity in the 1960s, a decade before women's presence in physics peaked. But it was only after the country began to open up its economy to the West in the 1980s that the numbers dropped. Within a few years they were down to about 12%, and by the 1990s they had dipped into single digits. (At Qinghua University, another elite school, the share of women physics students has held steady at about 10% ever since the department was reinstated in the late 1970s after the Cultural Revolution.)

    Wu deplores the sharp decline, blaming it in part on what she says is the media's current message to women: “It's better to choose a good husband and take care of the kids at home rather than working as a man's equal in the office or lab.” The education and employment systems reinforce that message, she adds, by making it hard for women to return to the scientific workforce after having a child.

    These messages are particularly powerful in steering women away from the physical sciences, say several female physics graduates, because they feed on existing stereotypes that paint an unflattering picture of physics as a career. “I loved physics in high school,” says Gu Dongmei, who chose a career in administration after graduating from Beijing University in 1988 as a physics major. “But after 4 years at the university I realized that physics could be too much of a challenge for a lifetime career.”

    Even those who remain in the field often choose teaching over research, says Xie Yicheng, who gave up a research job at the CAS high-energy physics institute to become a professor in the applied physics department at Beijing Industrial University. Higher education, she notes, offers a more flexible academic schedule, regular vacations, and greater opportunities for interaction with colleagues and students.

    Wu says that the rest of society could easily accommodate those needs “by providing more day-care centers and making it easier for women to go back to school after going on maternity leave.” But Li Fanghua warns that even small changes could be difficult without a major rethinking of how society views working women. “It's not so much discrimination as it is a legacy of feudal stereotypes,” Li says about performance reviews involving academic women. “It's something habitual and invisible.” Those invisible obstacles, all too familiar to women scientists in Western countries, suggest that it may be a long time before China relives an era in which women poured into physics.

    • *This spelling of Beijing and Qinghua universities is consistent with the style used by the Chinese government and national media, although the universities themselves prefer Peking and Tsinghua.

  16. BRAZIL

    Tough Placebo Rules Leave Scientists Out in the Cold

    1. Cassio Leite Vieira*
    • *Cassio Leite Vieira writes from Rio de Janeiro, Brazil.

    Brazil's bioethics commission opposes placebos in most multinational trials, despite a shifting stance by the World Medical Association

    RIO DE JANEIRO—Brazilian psychiatrist Márcio Versiani was hoping to be a co-investigator on a multinational study of patients suffering from panic attacks. The clinical trial is designed to test the efficacy of Effexor (venlafaxine), an antidepressant not yet approved for panic disorders, against an existing treatment, Paxil (paroxetine). A third group receives a placebo, allowing researchers to account for the well-known phenomenon in which some patients respond positively to any medical intervention.

    The study is now under way in the United States, Canada, Argentina, Mexico, and Chile—but not in Brazil. Versiani's application last year was one of several turned down by Brazil's federal research ethics commission, CONEP, because of its opposition to the use of placebos in international trials. That hard-line stance has infuriated many Brazilian mental health researchers, who complain that their ability to keep up with the field has been compromised. “We have been off the worldwide clinical research map for a year,” says Versiani. But Brazilian officials say they have no plans to modify their position, which stands at one extreme in an ongoing global debate over the use of placebos.

    Part of Brazil's National Council on Health, CONEP reviews research projects in areas thought to pose the greatest ethical dilemmas. Among these are the fields of genetics and human reproduction, studies that focus on vulnerable minorities (Indians and the mentally ill, for example), new pharmaceutical products and vaccines, and any research project with humans that involves foreign participation. In 2000 the commission rejected only 17 of 958 research proposals, but 11 of those—some 65%—were denied because of the “nonjustifiable use of a placebo.” Explains Corina Bontempo de Freitas, general secretary of the commission: “From society's point of view, the interest is in knowing if a new drug is better than one that is already known, not if it is better than a placebo.”

    Going backward. Psychopharmacologist Elisaldo Carlini calls the new policy “an absurdity, a regression.” CREDIT: STELA MURGEL

    The commission laid out its views in a March 2000 document prepared for the health council, which operates the commission. It declared that “in any medical trial, all patients, including those in the control group, if there is one, must be assured of the best diagnostic and therapeutic treatment.” That stance is similar to a statement adopted in October 2000 by the World Medical Association (WMA), which tightened the 1964 Declaration of Helsinki on patient rights (Science, 20 October 2000, p. 418). However, the statement stirred up such controversy that the WMA revisited the issue this fall, declaring that under certain circumstances “a placebo-controlled trial may be ethically acceptable, even if proven therapy is available” (http://www.wma.net/). The exceptions are for “compelling and scientifically sound methodological reasons” or for “minor treatment” that does not subject the patient to additional risk.

    Many Brazilian researchers say that the commission's position makes no sense. “It's a return to the days of the caves, an absurdity, a regression,” says Elisaldo Carlini, a professor at the Federal University of São Paulo and the first Brazilian to be elected to the United Nations' International Narcotics Control Board. Rejecting such trials has a host of negative consequences, say researchers, from the loss of information about the effects of certain drugs on Brazil's diverse population to a loss of revenue and fewer international collaborations. Researchers say that CONEP's rulings also reflect an “ideological bias” against research with foreign partners.

    De Freitas denies that there is any such bias and instead argues that scientists need to understand their role in society. “It is not democratic to have some citizens who are considered above suspicion,” she says. De Freitas also says that the community is overreacting to the commission's attempt to exercise prudent oversight. CONEP has approved some drug trials using placebos, she notes, such as a recently approved trial to test a new drug for Alzheimer's patients “when there is no alternative for treating the illness.”

    CONEP's critics include representatives of multinational research companies in Brazil, who say that the country's system for judging the ethics of research protocols is neither objective nor consistent. “If a Brazilian laboratory wants to test a new drug using a placebo group but no foreign partners, this research would not even go before CONEP,” says João Massud, a physician and medical director for Bristol-Myers Squibb Brasil S.A. Instead, the protocols would be reviewed by ethics committees from the research institution proposing the study.

    Others, however, think that the 4-year-old commission is doing a fine job. “CONEP should be a source of pride for the country,” says Fabiola de Aguiar Nunes, regional coordinator for the Brasília office of the Oswaldo Cruz Foundation, an important research center and producer of vaccines and medications in Brazil. “It is protecting our population.”

    Despite the recent strategic retreat by WMA, Brazilian officials appear to be standing firm. A new bioethics panel was formed last summer to review national policies relating to human cloning and reproduction, transgenic research, biosecurity, and other issues. But it “has no intention of reviewing any [of CONEP's] rulings, nor of addressing the issue of using placebo in international multicenter studies of new drugs,” says Cláudio Duarte da Fonseca, science policy secretary for the Ministry of Health, which operates the commission.

    Such strongly held views leave Versiani and other researchers gloomy about finding a middle ground in the debate. In the meantime, they say, every passing month adds to the loss of scientific opportunities. “While we watch our national capacity for scientific and technical development stagnate and regress,” he says, “the other countries [not excluded from the trials] continue to move forward.”

  17. SCIENCE EDUCATION

    U.S. Programs Ask Faculty to Help Improve Schools

    1. Jeffrey Mervis

    Federal officials admit that two new partnership programs are only one step in training better teachers and raising test scores

    This month the National Science Foundation (NSF) will unveil guidelines for a new $160 million program to improve math and science education in the nation's elementary and secondary schools. The effort builds on the latest buzzword in science education: partnerships. In this case, the intended partners are university scientists and local school districts. But don't look for poorly trained teachers to suddenly turn into math whizzes or for low-scoring U.S. students to become number one in the world. “It's going to take years and years, and there are no magic bullets,” says Judith Ramaley, who took over the foundation's $975 million education directorate in August. “This is still a research initiative.”

    That candor may be unusual for a federal official. But education researchers and educators—who generally support the new program—say that it's the right attitude for these yet-to-be-formed partnerships. The idea, proposed by President George W. Bush last spring in his first budget request to Congress, is for university scientists to help bolster the skills of undergraduates preparing to become teachers and mentor professionals who seek additional training. The partnerships are the latest twist in a decade-long “systemic reform” effort at NSF, now winding down, that prodded states and cities to make structural changes in how math and science are taught—with uneven results (Science, 4 December 1998, p. 1800). Ramaley expects the new program to grow significantly over the next several years and become NSF's flagship education effort.

    Colleges of education currently train most of the nation's precollege teachers, and relatively few students who major in science or math go into teaching. That bifurcated system has contributed to a shortage of teachers in most technical subjects and has produced a generation of teachers with inadequate training in science and math, say Ramaley and Susan Sclafani, who will oversee a similar but much smaller program at the Department of Education (DOE) (Science, 4 January, p. 24). “Last year in Texas we certified 330 people to teach math, [but] in Houston alone, with just 5% of the state's student population, we had openings for 100 math teachers,” says Sclafani, counselor to Secretary of Education Rodney Paige, the former superintendent of Houston city schools who brought Sclafani with him to Washington, D.C.

    Ramaley, who fostered various town-gown partnerships during stints as president of Portland State University in Oregon and the University of Vermont in Burlington, says that the goals of the new programs are clear: “We want to reduce the number of teachers teaching out of field [without the appropriate degree], increase the availability of material that engages students, and raise the number of students taking courses that prepare them for college.” Science educators are eagerly awaiting the rules for the NSF competition. (Check http://www.ehr.nsf.gov/ for the announcement.) Applicants for the 5-year awards will have considerable latitude in defining how to meet those goals, Ramaley says, and the school district-university partnership may also include other universities, museums, professional societies, businesses, and other institutions. DOE plans to get out a similar announcement by mid-March.

    But a clear vision doesn't eliminate the obstacles, Ramaley admits. “We've learned over the past decade that you need sturdy leadership, clear goals, and hard evidence that guides your intervention. But we haven't learned how to disseminate good practices, [scale up] prototypes, and sustain the effort in the face of leadership transitions, departing teachers, high student mobility, and competing political agendas.” There's also the ever-present problem of money. “For most cash-strapped districts, the only source of new funds is external,” she says. “However, we also expect districts to show how they plan to continue these reforms once the federal dollars go away.”

    Those in the trenches point out several more potential pitfalls. “Creating a sustainable partnership is not a trivial exercise,” says Margaret Cozzens, vice chancellor for academic affairs at the University of Colorado, Denver, and a former head of NSF's elementary and secondary education programs. “Each side has something to offer, but if one side tries to dominate, then the other may end up losing interest.” George Miller, a chemistry professor at the University of California, Irvine, who has participated in NSF-funded partnerships to help minority students and in a statewide teacher-training program, worries that research scientists might not be attuned to the needs of students in low-performing schools. “Most of them didn't attend such schools,” he notes. “The idea that students would simply not do their homework would boggle their minds; their first reaction would be to just flunk them!”

    More knowledge isn't the only thing that many teachers lack, say science educators. “One thing we've learned from the systemic reform projects is that math and science expertise is necessary but not sufficient,” says Iris Weiss, head of Horizon Research Inc., an evaluation firm in Chapel Hill, North Carolina. “You also have to know how to reach the kids.” Good lab materials and appropriate technical support are also key ingredients. “You need someone to keep the fish alive and provide new seeds,” says Philip Sadler, who heads science education programs for the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. “That doesn't require a Ph.D., but you need to love science.”

    Ramaley and Sclafani hope after 5 years to have a database of exemplary practices that local districts will mine. But some researchers worry that intense political pressure to show immediate gains in student performance will push NSF to favor tried-and-true remedies rather than innovative approaches. Even positive results might be hard to interpret, warns assessment expert Jere Confrey of the University of Texas, Austin. “Before we can replicate successful programs,” she says, “we must be able to understand why something worked.”
