News this Week

Science  21 Jan 2011:
Vol. 331, Issue 6015, p. 268
  1. High-Energy Physics

    Deep Potholes Block the Road to Discovery for U.S. Science

    1. Adrian Cho

    When the Department of Energy (DOE) decided last week to shut down the atom smasher at the last dedicated U.S. particle physics lab this fall, some scientists breathed a sigh of relief because the move opens the way to pursue a variety of new projects. But, hang on! U.S. particle physicists still face a bumpy ride along an uncertain road.

    It wasn't long ago that U.S. particle physicists formulated a broad, coherent, and realistic road map to carry the field through the next decade. But that program has now been hit with bureaucratic snags, delays, and unexpected expenses. Some observers even wonder whether DOE, which embraced the plan for its $810-million-a-year high-energy physics program, is capable of implementing it. “I'm not saying it's impossible, but it's going to be very difficult,” says Michael Lubell, a lobbyist with the American Physical Society in Washington, D.C.

    Sticking points.

    Two of the biggest projects in the U.S. domestic particle physics program, LBNE and WFIRST, face potentially fatal obstacles.

    CREDITS: (SOURCES) DOE, LBNL, U.S. LHC, FERMILAB, SLAC, NRC; (PHOTOS) CERN; NASA

    Four years ago, particle physicists in the United States faced a crisis. They knew that the Tevatron collider at Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, would be eclipsed by the higher-energy Large Hadron Collider (LHC) at the European particle physics laboratory, CERN, near Geneva, Switzerland. U.S. researchers hoped to build an even bigger machine, known as the International Linear Collider (ILC), to follow on the LHC, which started taking data last March. But DOE officials blanched when, in February 2007, physicists estimated that the ILC would cost at least $7 billion. Warning that the machine could not be built until the mid-2020s, DOE then asked the community to propose less-ambitious projects that would fill the gap.

    In a May 2008 report, researchers laid out a three-pronged approach for DOE-backed particle physics. The first priority, labeled the “energy frontier,” would be U.S. participation in the LHC. Fermilab would shift its focus to the “intensity frontier,” using a new proton accelerator to generate neutrinos and other familiar particles and study them in exquisite detail. At the same time, particle physicists would join cosmologists and astrophysicists on the “cosmic frontier” to identify the mysterious dark matter whose gravity binds the galaxies and the bizarre, space-stretching dark energy that's accelerating the expansion of the universe.

    The plan was designed to sustain the field after the 25-year-old Tevatron was shut down, and some researchers were anxious to get on with it. DOE officials reiterated the department's commitment to go in that direction when they announced that there wasn't enough money to run the Tevatron until 2014, 3 years past its scheduled end (Science, 14 January, p. 131). “I think that it's good there's a clear path ahead,” says Mark Messier, a physicist at Indiana University, Bloomington, and co-spokesperson for NOνA, a neutrino experiment based at Fermilab.

    But recent developments have left question marks around two of the biggest projects in the DOE road map. In particular, a dustup at the National Science Foundation (NSF) could crimp Fermilab's new studies of neutrinos. The elusive particles come in three types—electron neutrinos, muon neutrinos, and tau neutrinos—that morph into one another as they zing along.

    Fermilab scientists already study such “neutrino oscillations” in the MINOS experiment, in which they fire muon neutrinos through 730 kilometers of earth to a detector in the Soudan mine in northern Minnesota. And NOνA, already under construction, aims to nail down the last parameters describing neutrino oscillations. Starting in 2013, a team of 180 researchers will begin firing twice as many neutrinos at a detector weighing at least 15,000 tons, 800 kilometers away in Ash River, Minnesota.

    Around the same time, Fermilab researchers hope to begin building the centerpieces of their new program: the Long-Baseline Neutrino Experiment (LBNE), a detector weighing up to 200,000 tons, and Project X, a $1.8 billion proton accelerator that would generate neutrino beams three times more intense still. LBNE would compare neutrinos and antineutrinos in search of a difference that could explain why the universe contains so much matter and so little antimatter.

    LBNE may be homeless, however. It is supposed to reside in the $875 million Deep Underground Science and Engineering Laboratory (DUSEL) that researchers want NSF to build in an abandoned gold mine in Lead, South Dakota. But last month, the National Science Board, which sets policy for NSF, turned down a request for $29 million to finish DUSEL design work, objecting to NSF's plan to spend $480 million on infrastructure (Science, 17 December 2010, p. 1596).

    DOE and NSF are mulling new ways to split costs, says Young-Kee Kim, Fermilab's deputy director. And Fermilab does not plan to lay off any of its 1900 staff members after the Tevatron shuts down, she says. But that could change if its future flagship program is significantly delayed.

    There's trouble for the largest project on the cosmic frontier as well. In 1998, two teams of scientists probed the expansion of the universe by studying stellar explosions called type Ia supernovae and discovered that it is accelerating. That finding suggested that some bizarre dark energy is stretching space. Saul Perlmutter, a physicist at Lawrence Berkeley National Laboratory in California and the leader of one team, realized that to figure out what the stuff is, scientists needed a dedicated space telescope to spot supernovae and probe dark energy in other ways. In 1999, his team proposed that DOE build the $600 million Supernova Acceleration Probe. In 2003, the proposed space telescope evolved into the Joint Dark Energy Mission, a $1 billion project funded jointly with NASA.

    Last August, it looked like the idea might finally get off the ground when a National Academies' panel ranked it as the most desired space-based instrument for the coming decade in astronomy and astrophysics (Science, 20 August 2010, p. 894). The panel actually suggested combining it with two missions needing similar technical capabilities and renaming it the Wide-Field Infrared Survey Telescope (WFIRST). But 3 months later, hopes of an early launch for the now $1.6 billion project were dashed when NASA officials revealed that the cost of another mission, the James Webb Space Telescope, had ballooned from $5.1 billion to as much as $6.8 billion (Science, 19 November 2010, p. 1028).

    The way forward.

    Particle physicists hope to probe the origins of the universe with experiments at (counterclockwise from top left) the LHC and Fermilab and on space-based facilities like WFIRST.

    CREDITS (TOP TO BOTTOM): THE ATLAS EXPERIMENT AT CERN, HTTP://ATLAS.CH; JDEM INTERIM SCIENCE WORKING GROUP, GSFC, NASA; FERMILAB

    Facing a huge budget crunch, NASA has convened a “science definition team” to come up with something more affordable. “There have been fairly strong indications from NASA that they would be able to go forward with the mission in this decade only if it were less than $1.6 billion,” Perlmutter says. The team hopes to reduce the mission's cost to below $1 billion, he says. But the science definition team, led by James Green of the University of Colorado, Boulder, answers to NASA alone, Perlmutter notes. It's not clear what role, if any, DOE may play.

    Some fret that U.S. participation in the LHC experiments could also be at risk if the domestic parts of DOE's particle physics program wither. Scientists say the United States got a bargain when it bought into the $5.5 billion LHC program by agreeing to provide $531 million in hardware. Already, 1200 of the 2800 U.S. experimental particle physicists are working on the LHC's four large particle detectors, which could spot massive new particles such as the long-sought Higgs boson, the prize Fermilab physicists hoped to nab with the Tevatron.

    But APS's Lubell worries that U.S. legislators may lose interest in particle physics if all the action shifts to Europe. “Will Congress be amenable to funding a high-energy physics program that's based in Geneva?” he says. “I'd be concerned.”

    DOE may also struggle to meet sudden new demands on its particle physics budget. DOE officials shut down the Tevatron because they could not find the $35 million per year needed to keep it going. DUSEL developers may need several million dollars in short order to stay afloat. And last month, the Italian government announced that it would build a $520 million electron-positron collider, called SuperB, using $170 million worth of parts from a defunct machine called PEP-II at SLAC National Accelerator Laboratory in Menlo Park, California. If U.S. researchers want to do more than just donate equipment to the project, DOE may have to pony up a few million dollars to support any experiments they propose.

    Nobody expects that U.S. particle physicists will end up driving Chicago taxicabs after the Tevatron closes. But thanks to some unforeseen new curves, they may need to carry out some deft political maneuvers to continue down the road to scientific discovery.

  2. Food Safety

    Food Safety Law Will Likely Strain FDA Science

    1. Erik Stokstad
    New leaf.

    Science-based standards will help farmers prevent food-borne disease.

    CREDIT: RICHARD GREEN/THE CALIFORNIAN/AP PHOTO

    Contaminated food kills more than 3000 people each year in the United States and sickens more than 48 million, and recalls can cost the food industry many millions of dollars. A major food safety bill signed into law this month gives the U.S. Food and Drug Administration (FDA) new powers and aims to shift the focus from response to prevention of food-borne illness. “It's a big step forward,” says Marion Nestle of New York University. But achieving the goals of the law is going to be a stretch for FDA scientists, who are already spread thin.

    The law, called the Food Safety Modernization Act, calls for science-based guidance to help farmers and food processors prevent contamination, and it moves FDA toward a risk-based approach to monitoring the food supply. But FDA research is underfunded, lacks needed expertise, and faces serious hurdles in gathering data to meet these targets, according to recent reviews by the FDA's science advisory board and the National Academies. “They're not in a great position to do what this act requires them to do,” says Douglas Archer of the University of Florida, Gainesville, an author of the June 2010 academies' report.

    FDA regulates the safety of all domestic and imported food except poultry, egg products, and meat, which fall under the jurisdiction of the U.S. Department of Agriculture (USDA). A reaction to high-profile contamination of peanut butter, cookie dough, and other foods in recent years, the law requires more frequent inspections of food processing facilities—as often as every 3 years—based on the level of risk they pose. Now 56% of facilities go more than 5 years without inspection. In addition, FDA officials can now force companies to recall food products.

    The food industry will have to step up, too. Under the new law, FDA will require every facility in the food supply chain to devise and institute policies to lower the chance of contamination. Within 18 months, FDA must issue regulations for how companies should analyze and minimize hazards. “This is going to prompt the private sector to invest more in food safety research,” predicts Robert Brackett, former director of FDA's Center for Food Safety and Applied Nutrition (CFSAN) and now of the Illinois Institute of Technology in Chicago. Although large food processors and manufacturers employ scientists, midsize operations may need to turn to outside experts for help assessing risk. “It's going to require fairly massive input,” says Glenn Morris of the University of Florida, Gainesville. Operations that sell less than $500,000 worth of food are exempt.

    There will also be more work for the roughly 170 FDA scientists. For fruits and vegetables, FDA must create safety standards for farmers intended to prevent contamination from microbes or environmental pollutants. In 2006, for example, Escherichia coli in California spinach killed three people, prompting FDA to issue a nationwide warning not to eat bagged spinach. Investigators suspect that the microbes came from wild pigs wandering into the fields or irrigation water contaminated by cattle feces.

    It's a big job to create safety standards for all types of fruits and vegetables. Scientists will have to evaluate a range of questions, such as whether different water quality will be required for various crops; standards may vary depending on the type of irrigation used, for example. FDA must start its rulemaking process for these safety standards within 9 months, but it will likely take several years to complete. (Again, small farms are exempt.)

    Every 2 years, FDA scientists will have to rank the greatest risks to food safety and offer suggestions to industry on prevention. This will mean developing ways to compare the relative risk of various hazards—Salmonella in tomatoes versus hepatitis A in green onions, for example. “This is going to be a very steep learning curve,” says Robert Buchanan of the University of Maryland, College Park, who was a science adviser at CFSAN from 1999 to 2008.

    A larger problem is obtaining the data to evaluate risks. Information technology at FDA has critical shortcomings, Archer says: “They'll need a huge overhaul.” The computer systems used by FDA to track problems with the food supply, for example, aren't compatible with those of USDA's food safety investigators or those of the U.S. Centers for Disease Control and Prevention, which keeps tabs on food-borne illness in humans. Unlike most nations, the United States doesn't have a comprehensive surveillance system to monitor food safety nationally, Morris says—because it is the only industrialized country that splits its food safety regulatory authority among agencies. Nor does FDA have the necessary expertise in epidemiology, statistics, and mathematical modeling.

    Perhaps the biggest hurdle will be funding the new mandates—estimated to cost an additional $1.4 billion over 5 years—given a new Congress intent on cutting budgets. Representative Jack Kingston (R–GA), who will head the agriculture appropriations subcommittee, which is responsible for FDA, told The Washington Post last month that “the case for a $1.4 billion expenditure isn't there.”

    FDA Commissioner Margaret Hamburg said in a press conference last week that FDA will be able to partner with industry and states to accomplish some of the goals. “There's a lot that we can do both quickly and meaningfully,” she said, pointing to progress in developing new standards for fruits and vegetables. But this will be just piecemeal progress unless Congress ponies up, Buchanan says: “If they don't give them the resources to do this, they're putting FDA in a totally untenable situation.”

  3. ScienceNOW.org

    From Science's Online Daily News Site

    CREDIT: THINKSTOCK

    Pandas Prefer Old-Growth Forests Giant pandas are picky about what they eat—and where they live, too, according to a new analysis published online in Biology Letters. Using data from China's National Panda Survey, a 1999–2003 census of all the pandas in China, ecologist Wei Fuwen of the Chinese Academy of Sciences' Institute of Zoology in Beijing and colleagues examined how different variables such as forest age and tree diameter corresponded to the numbers of pandas in different habitats. They found that pandas tend to live where bamboo is most plentiful and, given a choice, in forests undisturbed by logging, also known as old-growth forests.

    The finding argues for preferential protection of these old-growth forests, says Wei. He may already have something to celebrate: In late December, the government decided to extend a decade-old ban on logging in state forests throughout the upper reaches of the Yellow and Yangtze rivers—panda country—through 2020.

    Planck Reveals Stellar Hatcheries The European Planck satellite has released its first results since its 2009 launch. Among them is a catalog of more than 10,000 cold, dense dust clouds in which star formation takes place. Many of these clouds, the coldest known objects in the universe, are much larger than expected, the Planck team reported last week at the American Astronomical Society meeting in Seattle, Washington.

    Planck's ultimate goal is to learn more about the evolution of the universe by mapping the cosmic background radiation, the faint afterglow of the big bang. That could take another year or two, say the researchers. In the meantime, astronomers are using the Herschel space observatory to study the size and structure of these cold clouds. The catalog includes objects that correspond to different stages in stars' prenatal evolution, say Planck researchers, allowing astronomers to investigate the mysterious final steps leading up to stellar birth.

    CREDIT: GEMINI OBSERVATORY/AURA/LYNETTE COOK

    Black Hole a Record Breaker Even among cosmic heavyweights, the black hole in the core of galaxy M87 stands out. New observations reveal that the object weighs in at a whopping 6.6 billion suns, making it the most massive black hole for which a precise mass has ever been measured.

    Previous measurements of the giant didn't compensate for the blurring effects of Earth's atmosphere. So astronomer Karl Gebhardt of the University of Texas, Austin, and colleagues used the adaptive optics capability of the 8.1-meter Frederick C. Gillett Gemini Telescope on Mauna Kea, Hawaii, to measure the speed of stars orbiting the hole. This allowed them to calculate the hole's mass—the most accurate ever obtained for such an object. The black hole's event horizon—the edge from within which nothing can escape, not even light—is four times as large as the orbit of Neptune, the outermost planet in our solar system, the team reported last week at the American Astronomical Society meeting in Seattle, Washington.

    Finding the most massive black hole isn't just a highlight for the record books. Gebhardt says studying extreme black holes like the one in M87 gives astronomers their best chance of understanding black hole physics, including gathering direct observational evidence for the existence of event horizons.

    CREDIT: BENOÎT GINESTE

    Flipper Bands Harm Penguins Metal flipper bands used by scientists to identify and track individual penguins in the wild could be hurting their wearers, according to a paper published in Nature.

    Inspired by studies suggesting that the bands had a negative impact, physiologist Yvon Le Maho of the French National Center for Scientific Research's Centre d'Ecologie et Physiologie Energétiques in Strasbourg, France, and colleagues placed bands on half of a group of 100 electronically tagged king penguins living on Possession Island near Antarctica. After 10 years, 36% of the unbanded birds were still alive, compared with only 20% of the banded birds. The banded birds also produced fewer chicks.

    The researchers suggest several explanations, including the possibility that the bands slow the king penguins down while swimming. It is unclear whether the bands are equally harmful to other species of penguin, how environmental factors come into play, or how long it takes the bands to have an effect.

    Read the full postings, comments, and more at http://news.sciencemag.org/sciencenow.

  4. Statistics

    ESP Paper Rekindles Discussion About Statistics

    1. Greg Miller

    The decision by a top psychology journal to publish a paper on extrasensory perception (ESP) has sparked a lively discussion on blogs and in the mainstream media. The paper's author, Daryl Bem, a respected social psychologist and professor emeritus at Cornell University, argues that the results of nine experiments he conducted with more than 1000 college students provide statistically significant evidence of an ability to predict future events. Not surprisingly, word that the paper will appear in an upcoming issue of the Journal of Personality and Social Psychology (JPSP) has provoked outrage from pseudoscience debunkers and counteraccusations of closed-mindedness from those willing to consider the possibility of psychic powers.

    Do the math.

    In the wake of a controversial paper about ESP, some statisticians are arguing that social scientists should use statistical methods based on Bayes' theorem (above).

    PHOTO ILLUSTRATION: (PHOTO) © CAROL AND MIKE WERNER/ALAMY; (TEXT OVERLAY) ADAPTED FROM BAYES' THEOREM BY E.-J. WAGENMAKERS ET AL.

    It has also rekindled a long-running debate about whether the statistical tools commonly used in psychology—and most other areas of science—too often lead researchers astray. “The real lesson to be learned from this is not that ESP exists, it's that the methods we're using aren't protecting us against spurious results,” says David Krantz, a statistician at Columbia University.

    Bem says his interest in ESP sprang from his sideline as a stage magician, which taught him to design tricks that made it look as if he had psychic powers. “I knew how to fake it, but I didn't believe it was real,” he says. But an invitation to perform for the Parapsychological Association and a subsequent visit with a parapsychology researcher in the mid-1980s prompted him to review the research on the topic. Impressed by what he saw as a number of strong findings, he began experiments of his own.

    In the new study, Bem says his goal was to design simple experiments, many of them based on common lab tests used by psychologists. In one experiment, subjects saw two curtains on a computer screen and had to guess which of the two had an erotic picture behind it. In this experiment and seven others, Bem found statistically significant evidence suggesting his subjects had unconscious knowledge of future events. For example, subjects picked the correct curtain about 53% of the time, and a standard statistical test (a t test) indicated a p-value of less than .01, well below the .05 threshold typically used to determine statistical significance. Bem says he intentionally stuck to familiar statistical methods: “If you use fancy statistics, people think you're hiding something in the weeds.”
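
    To put numbers on that claim: Bem's actual analysis used t tests across subjects, but a minimal sketch with a simple one-sided binomial test shows how a 53% hit rate can clear the p < .01 bar once there are enough trials. The total of 3,000 guesses below is a hypothetical figure chosen for illustration, not a number from the paper.

        # Illustrative sketch only: Bem used t tests across subjects; this swaps in
        # a one-sided binomial test with a made-up trial count to show how a 53%
        # hit rate on a two-choice task becomes statistically significant.
        from scipy.stats import binom

        n_guesses = 3000                    # hypothetical number of two-choice guesses
        hits = round(0.53 * n_guesses)      # 53% correct, as in the erotic-photo task

        # p-value: chance of at least this many hits if subjects guess at random
        p_value = binom.sf(hits - 1, n_guesses, 0.5)
        print(f"{hits}/{n_guesses} hits -> one-sided p = {p_value:.4f}")

    With only 100 guesses, the same 53% hit rate gives a p-value near 0.3, which is one reason large trial counts matter in such experiments.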

    But statisticians have long argued that there are problems lurking in the weeds when it comes to standard statistical methods like the t test. What scientists generally want to know is the probability that a given hypothesis is true given the data they've observed. But that's not what a p-value tells them, says Adrian Raftery, a statistician at the University of Washington, Seattle. In Bem's erotic photo experiment, the p-value of less than .01 means, by definition, that there's less than a 1% chance he would have observed these data—or data pointing to an even stronger ESP effect—if ESP does not exist. “Some people would turn that around and say there's a 99% chance there's something going on, but that's wrong,” Raftery says.
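
    In symbols, the distinction Raftery is drawing is between two conditional probabilities that are easy to conflate:

        p = P(\text{data at least this extreme} \mid H_0) \quad\neq\quad P(H_0 \mid \text{data})

    The first quantity can be tiny even when the second is large, if the hypothesis being tested is sufficiently implausible to begin with.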

    Not only does this type of thinking reflect a misunderstanding of what a p-value is, but it also overestimates the probability that an effect is real, Raftery and other statisticians say. Work by Raftery, for example, suggests that p-values in the .001 to .01 range reflect a true effect only 86% to 92% of the time. The problem is more acute for larger samples, which can give rise to a small p-value even when the effect is negligible for practical purposes, Raftery says.
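
    A standard back-of-the-envelope calculation, sketched below with made-up inputs rather than Raftery's published model, shows where such numbers come from: the chance that a significant result reflects a real effect depends on the prior plausibility of the hypothesis and the power of the test, not on the p-value alone.

        # Illustrative sketch: P(effect is real | significant result) by Bayes' rule.
        # The priors and power are hypothetical inputs, not Raftery's numbers.
        def prob_real(prior, power, alpha):
            true_pos = power * prior            # real effects that test significant
            false_pos = alpha * (1 - prior)     # null effects that test significant
            return true_pos / (true_pos + false_pos)

        # With alpha = .01 and 80% power, a 50% prior gives ~0.99,
        # but a 10% prior gives only ~0.90:
        for prior in (0.5, 0.1):
            print(prior, round(prob_real(prior, 0.8, 0.01), 2))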

    He and others champion a different approach based on so-called Bayesian statistics. Based on a theorem developed by Thomas Bayes, an 18th-century English minister, these methods are designed to determine the probability that a hypothesis is true given the data a researcher has observed. It's a more intuitive approach that's conceptually more in line with the goals of scientists, say its advocates. Also, unlike the standard approach, which assumes that each new experiment takes place in a vacuum, Bayesian statistics takes prior knowledge into consideration.
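
    Written in its odds form (a standard rearrangement, given here for reference), the theorem in the illustration above says that data update whatever odds a researcher starts with:

        \frac{P(H_1 \mid D)}{P(H_0 \mid D)} = \underbrace{\frac{P(D \mid H_1)}{P(D \mid H_0)}}_{\text{Bayes factor}} \times \frac{P(H_1)}{P(H_0)}

    The likelihood ratio, called the Bayes factor, is where the new data enter; the prior odds carry the prior knowledge.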

    That's important when the effect in question is something like ESP, for which prior knowledge of physics and biology suggests no possible mechanism, says Eric-Jan Wagenmakers, a mathematical psychologist at the University of Amsterdam in the Netherlands. He says the bar should be higher for such “extraordinary claims.” Wagenmakers and three colleagues have written a critique of Bem's paper that will appear in the same issue of JPSP. It includes a Bayesian reanalysis of Bem's data that concludes that, if anything, they support the hypothesis that ESP does not exist.

    Jeffrey Rouder, a quantitative psychologist at the University of Missouri, Columbia, says he agrees with the gist of Wagenmakers's critique but thinks his reanalysis understates the evidence. Rouder and a colleague have done their own Bayesian reanalysis of Bem's data, which they plan to submit to another journal. Unlike Wagenmakers, who considered each of Bem's experiments independently, Rouder pooled the evidence for ESP across all nine experiments to come up with a “Bayes factor” of 40. That number, which Rouder considers an upper limit, gives people an indication of how much they should adjust their prior beliefs about ESP, he says. For example, a skeptic who thinks the odds are 1 in a million should update his odds, at most, to 40 in a million; someone who believes ESP is twice as likely to exist as not should raise his odds to 80 to 1. The bottom line, Rouder says, is that Bem's evidence is worth considering, but unlikely to convert a skeptic.
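
    Rouder's updating rule is a one-line computation, sketched here with the two hypothetical prior positions quoted above:

        # Posterior odds = Bayes factor x prior odds, with Rouder's upper-limit
        # Bayes factor of 40 and two illustrative prior positions.
        bayes_factor = 40

        for label, prior_odds in [("skeptic", 1e-6), ("believer", 2.0)]:
            posterior = bayes_factor * prior_odds
            print(f"{label}: prior odds {prior_odds} -> posterior odds {posterior}")
        # skeptic: 1 in a million -> 40 in a million; believer: 2 to 1 -> 80 to 1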

    Because Bayesian statistics requires information about the prior odds that a hypothesis is true, critics contend that the methods are too subjective: researchers can set the prior odds higher or lower to produce the outcome they want, notes Jessica Utts, a statistician at the University of California, Irvine. But that flexibility may actually be an advantage in contentious areas like parapsychology, Utts says: “Everybody has a prior opinion on this.” (Utts has argued previously that the statistical evidence for ESP-like phenomena is strong.) The Bayesian approach can tell people how much they should be swayed by new data, regardless of their initial beliefs, Utts says. She's collaborating with Bem to reanalyze his data with Bayesian statistics.

    Enthusiasm for Bayesian statistics has come and gone over the years. Bayesian methods have been used in many fields from astronomy to clinical trials (Science, 19 November 1999, p. 1460) but have replaced standard statistics in only a few. Sociology, which often deals with large sets of survey data for which p-values can be especially treacherous, is one area in which Bayesian methods are widely used, says Raftery, who works in sociology. He hopes the publicity around the Bem paper will encourage researchers in other fields to consider using Bayesian methods in their work.

    But no statistical method can safeguard completely against erroneous results, and none can substitute for clear thinking, cautions Krantz: “One can't expect statistics to do the job of human inductive inference.”

  5. Scientific Publishing

    Open Access Gains Support; Fees and Journal Quality Deter Submissions

    1. Gretchen Vogel

    BERLIN—Scientists love open-access papers as readers, but they are less enthusiastic about submitting their papers to open-access journals, according to a new study. The European Union–sponsored Study of Open Access Publishing (dubbed the SOAP project) surveyed 50,000 researchers last year for their opinions on open-access journals, which make all their papers freely available online and usually charge authors a fee for each published paper.

    The study found overwhelming support for the concept, with 89% of respondents saying that open access is beneficial to their field. But that support didn't always translate into action: Although 53% of respondents said they had published at least one open-access article, overall only about 10% of papers are published in open-access journals. “We cannot ignore this gap anymore,” says Salvatore Mele, project leader for open access at CERN near Geneva, Switzerland, and a member of the study team.

    The study, which released its results last week at a symposium here, found two main reasons researchers don't submit to open-access journals: Almost 40% said that a lack of funding for publication fees was a deterrent, and 30% cited a lack of high-quality open-access journals in their field. For the majority of scientists, “journal quality and impact factor is most important—not [open access]—when deciding where to submit,” says Peter Strickland of the International Union of Crystallography, which publishes the fully open-access Structure Reports Online as well as seven subscription-based journals.

    The study also found that open-access journals are proliferating, especially among small publishers: One-third of open-access papers were published by the more than 1600 open-access publishers that publish only a single journal. Several hundred new open-access journals are being launched each year, noted Caroline Sutton of the Open Access Scholarly Publishers Association in The Hague. “It's really difficult to launch a new subscription-based journal, but the open-access fee model is scalable,” she says. “As you receive more submissions and publish more papers, you get more fees.”

    Large publishers are also catching on. The study identified 14 that publish either more than 50 journals or more than 1000 articles per year, accounting for roughly one-third of open-access publications. Several other large scientific publishers have announced new open-access journals in the past 6 months, notes Mark Patterson, director of publishing at the Public Library of Science (PLoS). All are modeled on PLoS ONE, the publisher's biggest journal (and main revenue generator). The journal publishes papers after peer review in which experts check for scientific rigor but not overall importance.

    Mele says the entire data set and the team's analysis, as well as videos of the symposium, will be available this week via the SOAP project Web site (http://project-soap.eu/).

  6. ScienceInsider

    From the Science Policy Blog

    Achilleas Mitsos has resigned as head of research and technology within the Greek education ministry. Mitsos, a former E.U. research chief, had hoped to set up a peer-reviewed grant system. But a year after Mitsos took the post, no money has been awarded and only one call for applications has been issued. Mitsos said in his resignation letter that disagreements with the deputy minister of education, Yiannis Panaretos, prevented him from doing his job.

    The World Health Organization (WHO) has released a plan to tackle the growing problem of resistance to the key malaria drug artemisinin. The multipronged strategy, which would cost $175 million, involves confronting the spread of resistant pathogens in Cambodia, where resistance first appeared, with better monitoring and malaria control. Scientific tasks include finding molecular markers for resistance, speeding up the development of alternative drugs, and studying how mass screening and treatment might eliminate malaria in places where resistance is found.

    A meeting last week of more than 200 climate scientists in Tsukuba, Japan, marked the official start of the 2013 Working Group II report for the Intergovernmental Panel on Climate Change (IPCC). Updated procedures for the working group, which focuses on climate impacts and is one of three components of the overall report, include new guidance for authors grappling with contentious issues such as using non-peer-reviewed sources and describing areas of uncertainty.

    Two students were murdered earlier this month near a manatee study site along the coast of Colombia. “My idea about the [security situation] and how that has improved conditions for field biology just changed radically,” said Carlos Daniel Cadena, a prominent ornithologist at the University of the Andes in Bogotá, which both victims attended. One of the students was scheduled to begin a 6-month research project on manatees.

    For more science policy news, visit http://news.sciencemag.org/scienceinsider.

  7. Marine Biology

    Killer Whales Earn Their Name

    1. Virginia Morell

    Biologists are recognizing the dramatic impact that some killer whales have on everything from endangered whales to penguins.

    Ready to rumble.

    A pod of mammal-eating killer whales on the prowl at Unimak Island.

    CREDIT: LANCE BARRETT-LENNARD

    One morning in May 2006, while onboard a boat anchored at Unimak Island in the Gulf of Alaska, cetacean biologists Lance Barrett-Lennard and Craig Matkin spotted something unusual: half a dozen gray whales (Eschrichtius robustus) milling about. They were migrating mother-and-calf pairs heading northwest from their birthing lagoons in Baja California, Mexico. The mothers were surely exhausted; they had eaten very little for 2 to 3 months and were still nursing their young. Now, nearly at the end of their 2000-kilometer journey, they should have been steaming through a narrow channel in the Aleutian Islands into the food-rich Bering Sea. What was holding them up?

    “We dropped a hydrophone in the water,” says Barrett-Lennard of the Vancouver Aquarium in British Columbia, Canada. “Sure enough, there were the feeding calls of killer whales.” The sounds came from the northwest, just where the gray whales needed to go. The biologists took their boat to the site and found a frenzied pod of orcas (Orcinus orca) feeding on the carcass of a gray whale calf. “That's when it hit home: the huge effect the killer whales have on the grays,” says Barrett-Lennard. Not only had the orcas killed a calf, they were blocking the other grays' migration.

    “Every migrating gray whale faces this gauntlet of killer whales,” says Barrett-Lennard, who, with his colleagues, reports on the predatory tactics of this previously unstudied population of mammal-eating orcas in this week's issue of Marine Ecology Progress Series.

    “We're only just beginning to understand how killer whales affect their prey's numbers, distribution, behaviors, and evolution—as well as the overall marine ecosystem,” he says. For example, the researchers documented that such calf carcasses end up feeding a host of other species: seabirds, bald eagles, fish, and sleeper sharks in the sea, and Alaskan brown bears and red foxes on shore.

    Danger Zone.

    A mammal-hunting killer whale surfaces in the shadow of Mt. Shishaldin, Unimak Island, along the Eastern Pacific gray whale migration path. Killer whales may take more than 33% of the annual gray whale calf population.

    CREDIT: CHRISTOPHE GUINET

    This study comes at a time when scientists are rethinking the role and effect of killer whales in marine communities, including the possible threat they pose to some of their endangered cetacean cousins. Researchers are now recording killer whale attacks on several species of great whales, as well as seals, narwhals, sea lions, walruses, and even penguins. From Russia's Chukotka Peninsula to South Africa's coastal waters to the icy seas of Antarctica—even in Canada's Hudson's Bay, where they had rarely been seen before—scientists are finding mammal-eating killer whales on the prowl.

    “There was a long-held belief that killer whales didn't eat the large whales to any great extent, but that is weakening because of these new, eyewitness studies,” says Alan Springer, a marine ecologist at the University of Alaska, Fairbanks, and the lead author of a controversial hypothesis that certain types of killer whales shifted their diet from whales and turned to Steller sea lions and then to sea otters after humans hunted the large whales nearly to extinction (Science, 4 April 2008, p. 44). “We're still dealing with the legacy of industrial whaling and sealing,” explains Robert Pitman, a marine biologist at the Southwest Fisheries Science Center in San Diego, California. “People used to think that because of the [1972] Marine Mammal Protection Act the numbers of whales, seals, and sea otters were just going to keep increasing. That's not necessarily the case. We need to think ecologically—and that means we have to add killer whales to the mix.”

    Picky eaters

    Certain killer whales, including all in captivity, eat fish. But whalers in the 19th and early 20th centuries knew that some killer whales (which are actually members of the Delphinidae family, the marine dolphins) ate other whales; they wrote about orcas boldly swimming in to feed on whale carcasses tied up to whaling vessels. In Eden, Australia, one family of whalers relied on a local pod of orcas to tell them via enthusiastic tail slaps that humpbacks or other whales were near shore, according to the Eden Killer Whale Museum. But these intelligent hunters are hard to observe and extremely mobile; killer whales are the most widely distributed mammal on Earth, except for humans. Only in recent years have scientists begun to appreciate the extent and impact of their predation on other marine mammals.

    Remains of the feast.

    Mammal-eating killer whales, such as this one in the Antarctic, prey primarily on seals and other cetaceans, but also eat penguins.

    CREDIT: ARI FRIEDLAENDER

    This new appreciation stems in part from the emerging realization that killer whales are picky eaters. Different populations specialize on certain prey, to the point that scientists now classify orcas into different ecotypes, with distinct ecologies, morphologies, and genetics, all dependent upon their prey preference. For instance, in the Pacific Northwest, researchers who have observed and tagged orcas say that three types coexist: the near-shore residents, which feed only on fish, primarily salmon; the highly mobile transients, which prey on marine mammals; and the offshores, whose habits are poorly known but which eat sharks, halibut, and possibly other fish. Each ecotype is genetically and socially isolated from the others, and each sticks strictly to its preferred prey. They even look different: Transients are larger than the fish-eating residents, are often heavily scarred, and make different vocalizations. (In 1970, before these differences were understood, a captured transient from British Columbia refused fish in captivity for 75 days and died.)

    Orca ecotypes have been identified in many parts of the world. In different parts of the pack ice in Antarctica, Pitman and his team have identified three ecotypes—specialists on minke whales, seals, and fish, respectively—as well as a “dwarf form” of killer whale, which was identified in 2007. In a 2009 paper in Molecular Ecology, other researchers identified two ecotypes in the North Atlantic: herring and mackerel feeders, which occasionally kill seals and have worn teeth, and strict mammal eaters with almost no tooth wear. And in a study released this month in Aquatic Biology, scientists report observing offshore orcas killing and eating 16 Pacific sleeper sharks (Somniosus pacificus), the first confirmed prey for this ecotype. Scientists are still gathering data to determine whether the types should be treated as separate species. And although they estimate that there are 40,000 to 60,000 orcas worldwide, they don't know the populations of each ecotype, or how their numbers may have fluctuated in the past.

    Pow!

    Orcas ram a minke whale near Unimak Island (on map, above) on the gray whales' migration path through the Aleutian Islands.

    CREDIT: DAVE ELLIFRIT/NOAA ALASKA FISHERIES SCIENCE CENTER UNDER PERMIT 782-1719/NATIONAL MARINE FISHERIES

    All orca watchers marvel over how skillfully each type takes (and fillets) its preferred prey, and their unique hunting strategies, which are thought to be learned behaviors. Antarctic seal killers hunt almost exclusively by working together to create waves to wash seals into the water, for example, says Pitman. Those that hunt whales use other tactics, such as avoiding echolocation (something the fish eaters rely on), and ramming prey with their heads while swimming at full speed. For example, at 7 a.m. one day in 1997, Pitman watched in disbelief as 35 orcas decimated a pod of nine sperm whales (Physeter macrocephalus) off the coast of California. The sperm whales had formed a tight rosette with their heads pointing in and tails out. But the orcas had their way, biting and ramming the larger whales. Prior to that attack, scientists thought sperm whales, with their toothed jaws and deep-diving abilities, were invulnerable to killer whales. “That was my first inkling that killer whales have the means to change things in their environment,” says Pitman. “This could be happening to the gray whales [now]. We may end up with significantly fewer gray whales.” During the gray whale migration, orcas are often seen killing gray calves in Monterey Bay, California, and in Pacific coastal waters from Washington to southeast Alaska.

    Battle of the titans.

    A team of orcas attacks a juvenile gray whale near Unimak Island; this one got away but may have died of its injuries later.

    CREDIT: JOHN DURBAN/NORTH GULF OCEANIC SOCIETY

    Following a lead from a fisher, Barrett-Lennard's team headed to Unimak Island in 2002. “On our very first morning, 13 killers swam right by the dock, and we saw 40 swimming together the next day. It was more than I'd ever seen together,” says Barrett-Lennard. The team started to collect skin biopsies via darts as well as a photographic record of the orca population, which now stands at 150 known individuals. Three of these orcas were later photographed in Alaska's Pribilof Islands, where they killed and ate a northern fur seal.

    That first season, Barrett-Lennard's team witnessed their first killer whale pod attack on a gray whale calf. As the orcas chased the gray toward the beach, they left the scientists' boat, which could only travel at 7.5 knots, “in the dust.” When the researchers caught up, they saw a slick of oil, a typical sign of an injured or dying whale, and scraps of blood and blubber on the water's surface; seabirds and bald eagles were already dining on the scraps. For a moment, the calf's rostrum poked up from the water, then sank again as the killer whales pulled the carcass around.

    During the 102 days (spread over 4 years) the scientists spent in the field, they saw seven more attacks, including three kills. Teams of three or four orcas work in tight coordination to herd their prey into water about 30 meters deep, while attempting to separate the calf from its mother, just as wolf packs do when pursuing bison or elk. Some of the 8-meter-long orcas then bite at the 10-meter calf's pectoral fins to pull it below the surface, while another swims onto its head and over its blowhole to drown it. They also bite at its throat and tongue, which can cause the gray to die from loss of blood.

    Danger below.

    Steller sea lions keep a wary eye on mammal-eating killer whales.

    CREDIT: LANCE BARRETT-LENNARD

    By killing the calf in relatively shallow water, the orcas are able to feed on the carcass for several days. “It's like an underwater refrigerator,” says Barrett-Lennard. He notes that after feeding, the orcas may leave the site for 24 hours or more but then return, and he suspects that they are somehow caching their prey as leopards do, perhaps beneath rocky ledges. If so, it would be the first time such food-storing behavior has been discovered in marine mammals.

    Gotcha.

    A mammal-hunting killer whale in the Antarctic snags a crabeater seal.

    CREDIT: ARI FRIEDLAENDER

    The scientists mapped each site where they saw orcas harassing or killing grays and where oil slicks from carcasses had accumulated. They suspect that the sea floor by Unimak Island is littered with hundreds if not thousands of gray whale skeletons, a suggestion that has excited specialists in the organisms that colonize whale bones. “If 100 [whale] carcasses are accumulating each year,” it will be “definitely a hot spot for whale-bone communities,” says Craig Smith, a biological oceanographer at the University of Hawaii, Manoa, who hopes to explore waters off Unimak with a remotely operated vehicle. “We've found the same species of whale-bone-eating worms in Monterey Bay as there are in Japan, something that's puzzled us,” adds Robert Vrijenhoek, an evolutionary biologist at the Monterey Bay Aquarium Research Institute in Moss Landing, California. “But if there are enough carcasses like these, the worms and other species could be using them like steppingstones to cross the Pacific.”

    The orcas weren't always successful in their attacks. They gave up if gray whale mothers were especially aggressive (a large whale can severely injure or kill an orca with a blow of its tail flukes), or if the grays managed to reach water less than 3 meters deep. Twice, the team saw grays free themselves by swimming into shallow waters while rolling. Another time, a gray whale mother and calf partially beached themselves to escape their predators.

    Bearing scars

    How much of an effect are the killer whales having on the gray whale population? Although the researchers saw only a few successful kills, Barrett-Lennard says they knew numerous others had occurred because of carcasses that washed up on shore and because they found killer whales lingering by oil slicks and surfacing with whale blubber in their mouths. Overall, based on the number of orca attacks seen, Barrett-Lennard says they are likely removing at least 33% of the gray whale calf population annually.

    Indeed, photographs of gray whale adults all show the telltale rakelike scars left by killer whale teeth. Barrett-Lennard and others think that nearly every gray calf finds itself in an orca's jaws at some point, as do many humpbacks and bowheads.

    The Eastern Pacific grays have largely recovered from whaling with a population of about 19,000 “and should be able to withstand that level of predation,” says Steven Swartz, a cetacean biologist with the Laguna San Ignacio Ecosystem Science Program in Baja California. “But it really depends on the number of breeding [gray] females,” which have recently been producing fewer calves for unknown reasons. Gray whale and killer whale researchers agree that the constant threat from the orcas has undoubtedly affected the gray whale's evolution and behaviors.

    Free lunch.

    The orcas' kill feeds bears, too.

    CREDIT: LANCE BARRETT-LENNARD

    Killer whales are probably the reason the gray whales hug the coastline while traveling, say Barrett-Lennard and his colleagues. Where killer whales are likely to be present, the grays adopt cryptic ways of swimming, barely surfacing to breathe. They also prefer traveling near or in kelp beds, where the kelps' gas bladders act as “acoustic reflectors, interfering with the calls of killer whales,” says Swartz.

    In other parts of the world, the hunting strategies of various populations of killer whales are just coming to light. In the Antarctic, they're now known to hunt penguins: In a 2010 study in Polar Biology, Pitman reported watching two pods of orcas kill gentoo and chinstrap penguins, while ignoring minke whales. He speculates that killer whales may have caused the unexplained 50% decline of emperor penguins in eastern Antarctica in the late 1970s. Killer whales also regularly kill humpback whale calves along the Queensland coast of Australia. Off the coast of Namibia, pods have been seen killing seabirds and were once observed to kill (but not eat) 268 juvenile cormorants over a 2-day span. In the Arctic, where climate change is thinning the pack ice that once kept them out, the mammal-hunting killers now have access to the rich marine mammal life in Hudson's Bay, says Steven Ferguson, a marine biologist with Fisheries and Oceans Canada in Winnipeg. “There has been an exponential increase in sightings, and we think they are causing trouble for the narwhals,” whose numbers are declining, and who are likely adapted to a life in the ice as a way to avoid the orcas. Ferguson's unpublished data suggest that during their 3-month annual visits, 25 orcas could kill about 5% (or almost 250 individuals) of the 5000-strong narwhal population. “It's something to consider in conservation plans for marine mammals,” he says.

    “There's a reason killer whales have this savage name,” says Springer, who notes that in the summer of 2008, researchers from the North Gulf Oceanic Society in Homer, Alaska, recorded the carcasses of eight humpback whales, some with their tongues torn out, killed by orcas in Alaskan waters. “Killer whales do kill whales.” And that could very well leave some cetacean biologists with a problem: Which whale should they root for?

  8. Nuclear Medicine

    Scrambling to Close the Isotope Gap

    1. Robert F. Service

    New technologies are needed urgently to assure the continued supply of radioactive materials essential for diagnosing and treating millions of patients around the world.

    Core strength.

    Belgium's BR2 reactor can produce one-quarter of the world's Mo-99, a key medical isotope.

    CREDIT: SCK·CEN, USED BY PERMISSION

    A year and a half ago, nuclear medicine physicians were hit with a double whammy. On 9 May 2009, the High Flux Reactor in Petten, the Netherlands, was shut down to fix corroded pipes. Ten days later, a heavy-water leak forced a shutdown at the National Research Universal reactor in Chalk River, Canada.

    The twin problems created a temporary shortage of technetium-99m, a radioisotope used in more than 30 million procedures a year worldwide for imaging everything from blood flow through the heart to bone cancer. Physicians were forced to use less Tc-99m for many procedures, ration what scant supplies remained, and find less desirable substitutes.

    The reactors came back on line by fits and starts and were both running again by late 2010. But the return to normalcy—the two reactors produce 60% of the world's radioactive molybdenum-99, which decays into Tc-99m—may not last long. With both the Chalk River and Petten reactors decades beyond their intended life expectancy, “the current global Mo-99 supply infrastructure is fragile and aging,” says Parrish Staples, who directs the office of European and African Threat Reduction for the National Nuclear Security Administration's (NNSA's) Global Threat Reduction Initiative in Washington, D.C.

    The situation isn't just a problem for doctors and patients. Governments around the world are working to phase out civilian use of the highly enriched uranium used to produce nearly all Mo-99 today, out of concern that the weapons-grade material could be diverted to make bombs. And finding replacement technologies to produce the Mo-99, and companies willing to take the financial risk of generating it, is proving challenging. As a result, “the clock is ticking for the crisis to reoccur,” says Staples.

    Less is more

    In one sense, the crisis is a sign of how important nuclear medicine has become in diagnosing and treating diseases. The discipline debuted after the first reactors were built in the 1940s and 1950s and grew slowly before taking off sharply in the past 2 decades. The number of nuclear medicine procedures has tripled since 1996, to well over 30 million a year today. A dozen different radioactive isotopes are now in standard use, and another dozen are in advanced development. By far the biggest players are Tc-99m and fluorine-18, the latter used in positron emission tomography (PET) brain scans.

    The widespread use of Tc-99m has put enormous pressure on a handful of reactors. Just five reactors—Chalk River, Petten, and facilities in Belgium, France, and South Africa—produce more than 99% of the world's Mo-99. Demand is measured in units called 6-day curies: the activity remaining 6 days after the material leaves the processing facility, a convention that accounts for decay in transit. Today, medical imagers use roughly 12,000 6-day curies of Mo-99 worldwide each week, a number increasing between 1.5% and 2.5% a year, according to Steve McQuarrie, a medical physicist at the University of Alberta in Edmonton, Canada, who spoke at a meeting of the American Nuclear Society (ANS) on the topic in November 2010. The United States, which accounts for roughly half of that demand, has for years imported all of its medical Mo-99 and other medical isotopes.
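
    The unit makes more sense with the decay arithmetic in view. Here is a minimal sketch, assuming the standard 66-hour half-life of Mo-99; the shipment quantities are hypothetical.

        # Why Mo-99 logistics are unforgiving: with a 66-hour half-life, activity
        # falls fast in transit. A "6-day curie" is the activity remaining 6 days
        # after the material leaves the processor.
        HALF_LIFE_DAYS = 66 / 24              # Mo-99 half-life, about 2.75 days

        def activity(initial_curies, days):
            """Remaining activity after simple exponential decay."""
            return initial_curies * 2 ** (-days / HALF_LIFE_DAYS)

        print(activity(1.0, 6))               # ~0.22: 1 Ci decays to 0.22 Ci in 6 days
        print(1.0 / activity(1.0, 6))         # ~4.54 Ci must ship per 6-day curie sold

    The factor of roughly 4.5 between what leaves the processor and what counts as a billable 6-day curie is one reason the supply chain has so little slack.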

    When operating at full tilt, the current reactors can keep up with this demand. But in addition to being hobbled by their age and need for frequent maintenance, most make their Mo-99 using weapons-grade uranium. This uranium is enriched in uranium-235, which yields Mo-99 in about 6% of its fissions. Last April, the United States and 46 other countries signed an agreement to phase out this highly enriched uranium (HEU) for civilian uses to reduce proliferation concerns.

    Now engineers at the facilities are scrambling to determine how to continue to make Mo-99 and other key medical isotopes with low-enriched uranium (LEU) or with alternatives that don't require fission at all. On the HEU-to-LEU conversion, there has already been good news. Earlier this year, NNSA announced that it had awarded $25 million to the Nuclear Energy Corp. of South Africa (Necsa) to help it retool its Safari-1 reactor to produce Mo-99 using LEU targets.

    Reactors normally make Mo-99 by firing neutrons at plates coated with uranium, some of which fissions into Mo-99. Technicians then dissolve the plates in acid and chemically separate out the molybdenum. The process is fraught with pitfalls. Both Mo-99 and Tc-99m decay too quickly to be stockpiled, so new targets must be delivered continuously. The amount of U-235 in the targets—as much as 90% at HEU facilities—raises concern that some of it could be diverted for weapons. To make matters worse, the chemical processing needed to recover the Mo-99 generates high-level radioactive waste, which poses proliferation risks of its own.

    Fragile and aging.

    The small number of production sites makes consistent Mo-99 supplies vulnerable.

    CREDIT: M. TWOMBLY/SCIENCE

    The Safari-1 reactor used targets enriched only to about 45% U-235. Switching to LEU enabled it to run on 20%. In July, Necsa delivered the first commercial-sized Mo-99 shipment to the United States for a series of quality tests. And last month, after receiving the U.S. Food and Drug Administration's approval, a Necsa subsidiary made its first commercial shipment of LEU-produced Mo-99 to Lantheus Medical in North Billerica, Massachusetts.

    Safari-1's success is good news, but Staples cautions that the solution isn't a panacea. To compensate for lower concentrations of U-235, which produces less molybdenum, targetmakers are using more uranium overall, are making targets denser, and are trying different metal alloys in the targets. But such changes also alter the chemical-separation process needed to recover the Mo-99. “Each [reactor] is more or less a unique facility,” Staples says. “It's difficult to just transition the solution in South Africa to other reactors, because they use different separation chemistries.”

    Beyond fission?

    Retooled Mo-99 facilities aren't the only ones battling for a piece of the business. Another strategy would revive a decades-old reactor technology called aqueous homogeneous reactors (AHRs). Whereas most nuclear plants today run on solid uranium fuel rods, AHRs are fueled by a uranium salt in an acidic solution. Last year, NNSA gave $9 million to Babcock and Wilcox (B&W) Technical Services Group in Charlotte, North Carolina, to work on ways of using AHRs with LEU to generate the neutrons needed to produce Mo-99.

    Frank Hahne, B&W's director of business development, says the company's current design is about the size of a 55-gallon (200-liter) drum. One advantage, Hahne says, is that Mo-99 produced in solution can easily be extracted with “wet” chemical techniques, whereas typical solid targets must be dissolved in acid first. Hahne says the company has completed its conceptual design and licensing through the Nuclear Regulatory Commission and plans to start construction in 2012, with production of Mo-99 in 2014.

    Another option does away with fission altogether. Researchers place targets enriched with Mo-98—another naturally occurring isotope—in a nuclear reactor that produces a high flux of neutrons. When Mo-98 atoms in the target absorb neutrons, they are converted to Mo-99, which can then be separated out.

    Last year, NNSA awarded $2 million to GE Hitachi Nuclear Energy in Wilmington, North Carolina, to speed up efforts to commercialize Mo-99 production through neutron capture. At the ANS meeting, Jennifer Varnedoe, an engineer with GE Hitachi, said scaling up the approach could provide half of the Mo-99 used in the United States.

    Producing Mo-99 by absorbing neutrons eliminates the need for U-235. The separation process also generates far less radioactive waste than traditional methods do. It does require extremely powerful neutron sources, however, such as the High Flux Isotope Reactor at Oak Ridge National Laboratory in Tennessee. Those sources are already in heavy demand for neutron scattering work and other scientific techniques. So GE Hitachi is also exploring the possibility of carrying out its neutron capture work at commercial nuclear power reactors.
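    A rough activation estimate suggests why the demand on neutron sources is so steep. None of these numbers come from the article; the cross section, flux, and irradiation time are typical textbook values, so treat this only as an order-of-magnitude sketch:

    ```python
    from math import exp, log

    # Textbook values, not from the article:
    N_A = 6.022e23                 # Avogadro's number
    SIGMA_CM2 = 0.13e-24           # ~0.13 barn thermal capture cross section of Mo-98
    FLUX = 1e14                    # n/cm^2/s, a strong research-reactor flux
    LAMBDA = log(2) / (66 * 3600)  # Mo-99 decay constant, per second

    atoms_per_gram = N_A / 98.0    # atoms in 1 g of Mo-98
    t_irr = 7 * 24 * 3600          # one week of irradiation, in seconds

    # Activation: A = N * sigma * phi * (1 - exp(-lambda * t))
    activity_bq = atoms_per_gram * SIGMA_CM2 * FLUX * (1 - exp(-LAMBDA * t_irr))
    print(f"~{activity_bq / 3.7e10:.1f} Ci of Mo-99 per gram of Mo-98")
    # ~1.8 Ci/g, orders of magnitude below fission-produced Mo-99, which is
    # why the product is more dilute and the flux requirements so high.
    ```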

    Mo-98 neutron capture also requires new equipment to transport the reaction products, which contain lower concentrations of Mo-99 than those created by the traditional fission process. Once produced, Mo-99 is loaded onto an affinity column in a technetium “generator” and sent to hospitals. There, Tc-99m is eluted from the column as the Mo-99 decays. A lower Mo-99 concentration would require larger columns, says Robert Atcher, a radiopharmacist at Los Alamos National Laboratory in New Mexico who directs the U.S. Department of Energy's (DOE's) virtual National Isotope Development Center. And because the current generators are standardized, the changeover would likely be costly for Tc generator companies as well as for the hundreds of radiopharmacies around the world.
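    The generator itself is governed by the classic parent-daughter (Bateman) relation. A sketch, again with standard half-lives and an arbitrary starting activity rather than anything from the story:

    ```python
    from math import exp, log

    # Mo-99/Tc-99m generator ("moly cow") regrowth after an elution.
    # Half-lives and branching fraction are standard values, not from the story.
    LAM_MO = log(2) / 66.0  # Mo-99 decay constant, per hour
    LAM_TC = log(2) / 6.0   # Tc-99m decay constant, per hour
    BRANCH = 0.875          # ~87.5% of Mo-99 decays feed Tc-99m

    def tc99m_activity(a_mo0: float, t_h: float) -> float:
        """Tc-99m activity t_h hours after a complete elution (Bateman equation)."""
        return (BRANCH * a_mo0 * LAM_TC / (LAM_TC - LAM_MO)
                * (exp(-LAM_MO * t_h) - exp(-LAM_TC * t_h)))

    # Hospitals typically elute daily; the column regrows toward equilibrium:
    for hours in (6, 12, 24):
        print(f"{hours:>2} h after elution: {tc99m_activity(1.0, hours):.2f} "
              "(relative to the initial Mo-99 activity)")
    ```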

    Instead of adding a neutron to Mo-98, scientists can also make Mo-99 by removing a neutron from Mo-100, a stable isotope that makes up about 10% of natural molybdenum. Researchers at the Institute for National Measurement Standards in Ottawa, Canada, are using a room-sized electron linear accelerator to fire energetic electrons through a tungsten target. The interactions produce gamma rays that can knock a neutron out of Mo-100 atoms, transmuting them into Mo-99.

    At the ANS meeting, Raphael Galea, a radiation metrology specialist with the institute, reported that he and colleagues had demonstrated all of the steps needed to convert Mo-100 to Mo-99. The researchers calculate that the process could meet all of Canada's demand for Mo-99 and Tc-99m with just two electron accelerators, at costs below today's market prices. The Canadian group is not alone: in October 2010, NNSA awarded $500,000 each to two Wisconsin-based organizations—NorthStar Medical Radioisotopes LLC and the Morgridge Institute for Research—to pursue variations on the use of accelerators to produce Mo-99.

    The biggest remaining downside to converting Mo-100 is scale. Canada uses only one-tenth as much Tc-99m as the United States. Producing enough Mo-99 for the United States would require about 54 linacs (linear accelerators), says Yaron Danon, a nuclear physicist at Rensselaer Polytechnic Institute in Troy, New York, who has been working on a similar Mo-100 transmutation effort. Having a distributed network of Mo-99 producers would prevent a crisis if any one facility had to shut down. But the logistics of building such a large network could be complicated. “So it may make more sense [for Canada] than for the U.S.,” Danon says.

    Perils of success

    Several other Mo-99 production schemes are also under investigation, including one that would use medical cyclotrons that currently make radioisotopes for PET. With so many candidate technologies in the running, says J. David Robertson, a radiochemist at the University of Missouri, Columbia, “I'm pretty confident one or more will pan out.”

    Once the technical side falls into place, the policy questions move to the forefront: in particular, who will build the new facilities, and at what cost? Historically, medical isotopes have been produced at government-sponsored research reactors. Last year, the International Atomic Energy Agency (IAEA) released an economic analysis showing that companies were at a competitive disadvantage because of the millions of dollars in government support for research reactors worldwide. With aging reactors going offline and governments looking for ways to cut costs, however, those subsidies could soon end. Either way, the IAEA study recommended that governments charge full price for their Mo-99 production services, giving companies an incentive to develop technologies that can supply Mo-99 at lower cost. A very different sort of problem could arise if multiple technologies succeed and the world winds up with a glut of Mo-99. That could cause prices to crash, forcing some suppliers out of business.

    As a way through some of these issues, the House of Representatives in the last Congress passed legislation authorizing DOE to spend $165 million on the development of new Mo-99 production technologies. Then-Senator Kit Bond (R–MO) blocked a similar bill in the Senate out of concern that it could disrupt HEU shipments from the United States to other countries. But Bond has retired, and his departure bolsters the chances that the new Congress will approve support for alternative Mo-99 production technologies.

    With Mo-99 production technology, financing, and policy all up in the air, many nuclear medicine experts are concerned. In July 2010, a coalition of nine professional nuclear medicine organizations suggested that the combination of potential changes threatens to put patients worldwide in harm's way. They concluded: “Forcing a change to a new, and as yet unproven, technology without proper research and development, or regulatory and financial support will most certainly cause harm.”

  9. Nuclear Medicine

    A Field Back in Vogue

    1. Robert F. Service

    The Department of Energy and the Nuclear Regulatory Commission have launched a series of programs to support training and education in nuclear sciences. Enrollments have nearly doubled since 2004, but experts say that's only enough to catch up with the demand of the last decade.

    Radioisotopes aren't the only bottleneck confronting nuclear medicine. For years, the United States has struggled to train enough nuclear engineers, radiochemists, and medical physicists to keep the field healthy.

    Rebound.

    More than a decade of support programs have helped bolster the number of students earning degrees in nuclear engineering.

    CREDIT: ADAPTED FROM U.S. DOE NUCLEAR ENERGY UNIVERSITY PROGRAMS BASED ON DATA FROM OAK RIDGE INSTITUTE OF SCIENCE AND EDUCATION

    Students shunned nuclear sciences in the wake of the nuclear accidents at Three Mile Island in Pennsylvania in 1979 and Chernobyl in Ukraine in 1986. By the mid-1990s, only 600 nuclear engineering students were enrolled in graduate and undergraduate programs in the United States—down two-thirds from a decade earlier, says John Gutteridge, who runs the education grants program for the Nuclear Regulatory Commission (NRC) in Bethesda, Maryland. Much of that decline probably occurred because fewer nuclear plants were being commissioned and built. Even so, a mid-1990s study by the University of Michigan suggested an annual shortfall of 400 nuclear engineers needed to keep up with attrition and retirement in the field.

    Among radiochemists, the output of Ph.D.s dropped from between 30 and 40 per year in the 1970s to as few as three in some years in the early 2000s. “I would say the situation was desperate,” Gutteridge says. Now numbers are up again, and Gutteridge and others say recently minted doctorates have been awash in job offers. “Our Ph.D. graduates are snapped up right away,” says John Gilligan, a nuclear engineer at North Carolina State University in Raleigh, who also directs the Department of Energy's (DOE's) Nuclear Energy University Programs integration office.

    To combat the downward trend, beginning in 1998 DOE launched a dozen different grant programs amounting to about $30 million a year to support different constituencies within nuclear sciences: universities, faculty members, and Ph.D., master's, and bachelor's students. Those programs continued until 2007, when DOE officials in the Bush Administration decided to shift the money elsewhere. In 2008, NRC launched its own program to support education and training in the field at about $20 million a year. And more recently, the Obama Administration and Congress restarted the DOE programs, which now spend $50 million to $80 million a year on student, faculty, and research support.

    The rebound has been dramatic: Enrollments in undergraduate and graduate nuclear engineering programs have nearly doubled since 2004, to 4752 in 2010. And as many as 30 Ph.D.s in radiochemistry are likely to be awarded in 2011, R. Craig Williamson, who directs the South Carolina Universities Research and Education Foundation in Aiken, said at a November 2010 meeting of the American Nuclear Society in Las Vegas, Nevada.

    Crisis averted? “By no means,” says Rolf Zeisler, a radiochemist with the National Institute of Standards and Technology in Gaithersburg, Maryland. “Right now we are catching up with the demand of the last decade.” J. David Robertson, a radiochemist at the University of Missouri, Columbia, cites health physics—a training ground for radiation safety officers—as one area of concern. But he and others agree that, like the production of medical isotopes, the training of students in the field has stepped back from the abyss.

  10. Society For Integrative And Comparative Biology

    Soccer and the Art of Deception

    1. Elizabeth Pennisi

    In soccer, goals have been scored and games won when free kicks were wrongly awarded because players faked collisions with opponents. A group of researchers reported at the meeting that they have found empirical support in the antics of footballers for game-theory predictions about dishonest signals.

    Faked fall?

    Near goals, referees are more easily tricked by players pretending to have been fouled.

    CREDIT: THINKSTOCK

    Time was when footballers (soccer players, to Americans) prided themselves on playing tough. They took their knocks and played on without complaint. But these days it's common to see a player fake a collision with an opponent and writhe on the ground in hopes of getting a free kick. Such “dives” are “very frustrating to watch,” says Christopher Rose of James Madison University in Harrisonburg, Virginia, and goals have been scored and games won on wrongly awarded free kicks. To Gwendolyn David, Robbie Wilson, and Daniel Ortiz-Barrientos, however, the deceptive practice provided a unique research opportunity. Working at the University of Queensland in Brisbane, Australia, they have found empirical support in the antics of footballers for game-theory predictions about dishonest signals. And they have come up with ways the game might be improved. “It's a very clever way of testing the predictions of signaling theory,” says Simon Lailvaux, an integrative biologist at the University of New Orleans in Louisiana. “Finding a system where you can actually quantify or categorize deceptive signals is very exciting.”

    Biologists have long argued about how dishonesty arises and persists in a population. “Game theory suggests that most signals should be honest”; otherwise, the receiver of the signal will stop paying attention, Wilson says. Consider the case of the boy who cried wolf too many times. But trickery can be very beneficial. A fiddler crab brandishing a supersized claw may get the mate even though the claw lacks the muscle needed to be effective. The bottom line is, if the reward is great enough, there's an incentive to fudge the truth.

    Game theory predicts that deception should occur at low frequencies but should increase in proportion to the benefit as well as to the degree to which the deceived party fails to discriminate between real and fake signals. But these tenets have had mixed support or have not been tested in real organisms, Wilson says. “There are still a lot of questions to be asked about how dishonesty can be maintained in a population,” notes physiologist Gary Wassmer of Bloomsburg University in Pennsylvania.

    Wilson and David, footballers themselves, decided to examine whether diving conforms to game-theory predictions. David surveyed falls from 60 games—10 each from the Spanish, German, Australian, Dutch, Italian, and French leagues—replaying them on TV to decide which were dives. She classified each fall as legitimate, slightly deceptive (the player was touched by the opponent but exaggerated the consequences), or highly deceptive. She also tracked where on the field the fall occurred, at what point in the game it happened, the score at the time, and whether the fallen player was on the home team.

    As game theory predicts, legitimate falls far outnumbered fake ones, Wilson reported at the meeting. Only 6% of the 2800 falls were highly deceptive dives. Players were two to three times as likely to dive when close to the goal, where the payoff was huge: Statistics show an 80% chance of scoring from a penalty kick. Almost none of the highly deceptive dives resulted in free kicks against the diver. And referees were most likely to reward dives that occurred close to the goals—perhaps because the play was farther from the referee there and the deception harder to detect, he noted.
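    The near-goal clustering is just what a simple expected-payoff model predicts. In this sketch, the 80% penalty-conversion figure comes from the story, but every other probability is a hypothetical placeholder chosen only to illustrate the logic:

    ```python
    # Hypothetical expected-payoff model of diving; only the 80% penalty
    # conversion rate comes from the story. Other values are placeholders.
    p_score_penalty = 0.80       # chance of scoring an awarded penalty (story)
    p_ref_fooled = 0.20          # hypothetical: referee rewards a given dive
    p_score_distant_kick = 0.05  # hypothetical: distant free kicks rarely score

    ev_near = p_ref_fooled * p_score_penalty      # expected goals per dive near goal
    ev_far = p_ref_fooled * p_score_distant_kick  # expected goals per distant dive
    print(f"Near goal: {ev_near:.2f} expected goals per rewarded-area dive")
    print(f"Far from goal: {ev_far:.2f}")
    # With these placeholders a near-goal dive is worth ~16x a distant one,
    # matching the prediction that deception tracks the size of the benefit.
    ```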

    “All leagues, and especially those with higher levels of diving, will want to consider increasing the frequency at which they punish divers for their deception,” says behavioral ecologist Timothy Wright of New Mexico State University in Las Cruces. Wilson also suggested that more referees could be positioned where dives are most likely to occur.

    Rebecca Safran, an evolutionary ecologist at the University of Colorado, Boulder, cautions that the work is really a study of cultural evolution and not the evolution of traits with a genetic basis. Nonetheless, notes Lailvaux, “I think people will be reading and thinking about this study for a long time.”

  11. Society For Integrative And Comparative Biology

    Getting to the Guts of Tadpole Carnivory

    1. Elizabeth Pennisi

    A developmental biologist reported at the meeting that inhibiting a single developmental signal—retinoic acid—can yield a carnivorelike gut morphology in a tadpole species that's normally mostly vegetarian.

    Known in the aquarium trade as Budgett's frog, Lepidobatrachus laevis is not your typical tadpole. Most frogs at this early stage of life are primarily plant eaters, only later metamorphosing into quick fly catchers. But from day one, Budgett's frog is a carnivore, swallowing whatever wiggles by, including fellow tadpoles. Its gut reflects this voracious lifestyle: It has a very large, well-defined stomach and a short intestine, quite the contrast from the long, winding tube of most tadpoles. Nanette Nascone-Yoder now knows how it gets that way.

    A developmental biologist at North Carolina State University College of Veterinary Medicine in Raleigh, Nascone-Yoder reported that the inhibition of a single developmental signal—retinoic acid—can yield the carnivorelike gut morphology in a tadpole species that's normally mostly vegetarian. That suggests that it didn't take much for the Budgett's frog tadpole to evolve its unusual eating habits. “Revealing the molecular mechanism contributing to evolutionary change in development is significant,” says Mihaela Pavlicev of the University of Oslo. But it's “particularly interesting that the change is relatively simple, involving a single factor [that explains] so much of what appears to be a complex morphological change.”

    Big gulp.

    A tadpole is being digested inside the enlarged stomach (lower right) of a Budgett's frog (upper right) tadpole.

    CREDIT: P. JANZEN; N. NASCONE-YODER/COLLEGE OF VETERINARY MEDICINE/NCSU

    Nascone-Yoder and her colleagues tested the effects of 200 small molecules on the development of the gut in Xenopus laevis embryos, a frog often studied in the lab. These embryos are transparent, so the digestive tract can be monitored as the embryo grows. The researchers applied the chemicals after the head, heart, and nervous system formed but before the gut started to appear.

    Several chemicals altered the Xenopus gut from a standard long, winding tube to a shorter one with a more obvious stomach—and two of them were inhibitors of retinoic acid. The chemicals also reoriented the folding pattern of the intestine, shifting a right-left loop backward to give the stomach more room and center the liver and pancreas. “This looked remarkably like what we normally see in the carnivorous tadpole,” Nascone-Yoder said.

    Furthermore, when her group added retinoic acid to developing Budgett's frog embryos, the guts looked a lot like a Xenopus embryo gut, with the right-left loop shifted forward and the pancreas and liver positioned more asymmetrically. The researchers traced the positional shifts of the gut loops to a gene called Pitx2, whose pattern of expression was altered when they changed the amount of retinoic acid.

    “The evidence was quite convincing,” says Günter Wagner, an evolutionary developmental biologist at Yale University. But Scott Gilbert, a developmental biologist at Swarthmore College in Pennsylvania, wants more proof. “I am not prepared to say this is an important story until I know that it is actually tied in with actual gut changes,” he says.

    Meanwhile, Nascone-Yoder's colleague James Hanken of Harvard University is trying to figure out how these changes evolved. He suggests that the Budgett's frog tadpole may simply be fast-forwarding gut development, skipping the larval gut morphology altogether. If that's the case, then thyroid hormone, which regulates metamorphosis, is likely also involved. That's the next line of investigation for Nascone-Yoder and her team.

  12. Society For Integrative And Comparative Biology

    Turtles Are Not Just Drifters

    1. Elizabeth Pennisi

    Seven sea turtles with solar-powered satellite transmitters attached to their backs have demonstrated that they do more than simply follow the ocean currents as biologists had long assumed, researchers reported at the meeting.

    Tagged turtle.

    Satellite transmitters are now small enough for young loggerheads to wear.

    CREDIT: JIM ABERNETHY

    In southeastern Florida, silver dollar–sized sea turtles scramble down the beach after emerging from their nests and head out to sea. There they get caught up in the Gulf Stream and disappear, returning years later as adults. Biologists, who call this first part of a sea turtle's life the “lost years,” have long assumed that the turtles simply catch a ride in the circulating currents of the North Atlantic Ocean, which takes them north, east, down along northern Africa, and finally back around to Florida again. Now seven turtles have demonstrated that they do more than simply follow the currents.

    Jeanette Wyneken and Kate Mansfield of Florida Atlantic University, Boca Raton, collected loggerhead sea turtle hatchlings from local beaches. After much testing, they determined that once hatchlings reached 300 grams—about the size of a large grapefruit—the turtles could swim comfortably with a 15-gram solar-powered satellite transmitter attached to their backs.

    Wyneken and her colleagues released radio-toting turtles into the Gulf Stream offshore from their nests. The transmitters turned on for 8 hours every 2 days, relaying each turtle's location for up to 172 days before falling off. “This is very exciting work, as it's the first successful long-term tracking of neonate loggerhead sea turtles using satellite telemetry,” says Kristina Williams, director of the Caretta Research Project in Savannah, Georgia.

    Mansfield mapped the movements of the turtles together with satellite and oceanographic data on currents, plankton, and sea-surface temperatures. Instead of sticking to the fastest part of the Gulf Stream, the turtles often moved toward the edges, where they would hang out in eddies. One individual left the Gulf Stream completely and headed to Bermuda for 2 months before rejoining the Gulf Stream. The satellite information also showed that they all stayed in water above 17°C, Wyneken reported at the meeting. “This study shows them actively pursuing different routes after reaching the Gulf Stream, perhaps to areas with better food resources, instead of just floating wherever the currents guided them,” Williams says.

    These results fit in well with work by Nathan Putman, a behavioral ecology graduate student at the University of North Carolina, Chapel Hill, and his advisers, Kenneth and Catherine Lohmann. They have shown that turtles will orient to the magnetic fields they experience in the Atlantic Ocean, which suggests that they use the magnetic field to help guide them. And Putman has been testing virtual turtles in a computer model of ocean currents to see what paths they take. “The tracks of the real turtles of Wyneken and Mansfield looked a lot like the ones from my own research,” he says.

    Wyneken has recently released 10 more tagged turtles. You can follow their progress at http://scim.ag/sea-turtles.
