News this Week

Science  18 Jun 2010:
Vol. 328, Issue 5985, pp. 1462
  1. Food Security

    Water Shortages Loom as Northern China's Aquifers Are Sucked Dry

    1. Li Jiao*

    BEIJING—When Luo Yiqi visited the Inner Mongolia region of northern China 3 years ago, he was in for a surprise. In recent years, overgrazed grasslands had withered and turned to desert. Luo had been expecting that. But what stunned the ecologist from the University of Oklahoma, Norman, were the rice fields along a desiccated riverbed. Farmers were pumping water from deep aquifers to cultivate one of the thirstiest crops on the planet. “Apparently, farmers did not get enough scientific guidance,” says Luo.

    Of all China's environmental woes, the biggest threat to livelihoods and food security may be looming water shortages. China's freshwater resources amount to 2220 cubic meters per person, just a quarter of the world average. For years, the central government focused on declining river flows and rising pollution, largely ignoring what has now become an acute problem: vanishing groundwater. “It was a question of ‘out of sight, out of mind,’” says environmental scientist Chen Jining, vice president of Tsinghua University here.

    The outlook is especially dire on the North China Plain (NCP), an area encompassing six provinces and the Beijing and Tianjin metropolitan areas. Over the past 40 years, NCP's water table has fallen steadily as some 120 billion cubic meters more water has been pumped from the land than the amount replaced by rainfall, says Liu Changming, a hydrologist at the Institute of Geographic Sciences and Natural Resources Research of the Chinese Academy of Sciences (CAS) here. Many wells are expected to run dry in the coming decades; when this happens, warns Lester Brown, president of the Earth Policy Institute in Washington, D.C., “China will lose the ability to feed about 10% of its 1.3 billion people.”

    On hostile ground.

    An engineer surveys a fissure caused by overexploitation of groundwater in northern China's Hebei Province.

    CREDIT: CNSPHOTO

    Until recently, the government was banking on a massive engineering solution. The $75 billion South-to-North Water Diversion Project, now under construction, would bring water from the Yangtze basin to the parched north (Science, 25 August 2006, p. 1034). But that remedy is no longer deemed sufficient. “Faced with an escalating crisis, the government is calling for stepped-up monitoring and scientific advice on groundwater management,” says Liu. This year, the Ministry of Science and Technology launched a $4.4 million, 5-year project to better understand China's subterranean water resources. The project “is the latest sign of the government's changing attitude,” says Pang Zhonghe, a hydrogeologist at CAS's Institute of Geology and Geophysics here. An even bigger effort should get under way soon: a $250 million initiative to drill scientific observation wells to monitor groundwater levels and quality.

    China has largely brought its water problems on itself. The classic novel Outlaws of the Marsh, written more than 600 years ago, describes the reedy wetlands of the Haihe River Basin, NCP's northern section. The fertile region has been a major target of development; augmenting water drawn from rivers, NCP went from 1800 powered wells in the 1960s to more than 700,000 such wells by 2000, says Liu. The Water Rush paid big dividends, such as turning NCP into the country's wheat and corn belts, says Liang Shunlin, a geographer at the University of Maryland, College Park, and Beijing Normal University.

    Across the Haihe Basin, each year about 50% more shallow groundwater is consumed than recharged by rainfall. And the surfeit of wells is exhausting deeper phreatic water in the saturated zone that takes much longer to replenish. As NCP is sucked dry, it is shriveling and fissuring, and subsidence now averages more than 1 meter. A few years ago, Liu's team, using radiocarbon dating, found to its astonishment that phreatic water now being drawn from NCP is as much as 30,000 years old. It would take that long, Liu says, to replenish the deep aquifer.

    To save water and maintain food security, researchers have proposed improved water conservation, better water pricing policies, and rational agricultural practices. Growing crops and raising livestock account for nearly 66% of China's total water consumption, Liu says. Water-saving measures in agriculture that could be implemented right away include drip and sprinkler irrigation and no-till farming, which reduces evaporation, says Xiao Xiangming, a landscape ecologist at the University of Oklahoma, Norman. Liang adds that researchers should strive to develop crops that lose less water to evaporation and better resist drought.

    The south-north diversion may provide temporary relief when the central and eastern lines are completed in 2014. “We can store some of the transferred water underground and take it when we need it,” says Qian Yi, a professor in the Department of Environmental Science and Engineering at Tsinghua University. But any water from the diversion project should be viewed as a “supplementary” resource, Liu says: “The first step is to use local water efficiently.”

    A comprehensive solution may depend on better information. Reliable data on aquifer storage, recharge, and water use are notoriously difficult to obtain, says Li Wenpeng, chief engineer at the China Institute of Geological Environmental Monitoring here. For a half-century, the government has relied heavily on data from nonstandardized farmers' wells. To address this shortcoming, the land and water ministries have devised a project, now in final review and expected to start this year, in which more than 20,000 monitoring wells would be drilled across the country, with a focus on northern regions. Each well would track water level, temperature, and water quality, and more than half would also test for pollutants and other contaminants. Remote sensing could augment this effort, Liang says.

    “It is time to take action,” says Yuan Daoxian, a hydrogeologist at the Institute of Karst Geology in Guilin, China. Unless the government acts quickly to rein in groundwater consumption, Liu adds, “the problem will be impossible to solve.”

    • Li Jiao is a writer in Beijing.

  2. U.S. Astronomy

    Arecibo to Stay Open Under New NSF Funding Plan

    1. Yudhijit Bhattacharjee

    Three years ago, the Arecibo Observatory in Puerto Rico was a telescope with a shining legacy and a dim future. An expert panel had just recommended that its chief supporter, the National Science Foundation (NSF), cut its funding sharply to make room for new astronomy projects. That step, the panel warned, could mean closing the facility—home to the world's largest radio telescope—in 2011 if it couldn't find other backers.

    Now, after much hand-wringing, NSF has decided to keep the observatory going until at least 2016 with help from the foundation's geosciences directorate and from NASA. The new funding partnership means more atmospheric science and asteroid tracking for the radio telescope—and a new lease on life.

    Stay tuned.

    Arecibo will keep monitoring the heavens.

    CREDIT: COURTESY OF THE NAIC, ARECIBO OBSERVATORY, A FACILITY OF THE NSF

    “We're relieved that closure is off the table,” says Donald Campbell, director of the National Astronomy and Ionosphere Center at Cornell University, which manages the observatory on NSF's behalf. Under the new funding arrangement, he says, “there will be a more even mix of atmospheric science, asteroid tracking, and astronomy at the observatory than in the past.”

    Built in 1963, the 305-meter telescope made possible storied discoveries such as detection of a binary pulsar system, which led to a Nobel Prize, and the first planets outside the solar system. But in November 2006, an outside panel led by astrophysicist Roger Blandford of Stanford University declared that Arecibo was not as scientifically important as other NSF-funded telescopes (Science, 10 November 2006, p. 904). The review recommended that the agency's astronomy division reduce its funding from $10.5 million in 2006 to $4 million and that Arecibo be closed in 2011 if it could not find $4 million from other sources by that date.

    Its dire fate has been averted by NSF's own Division of Atmospheric and Geospace Sciences (AGS) and NASA, as well as by a modification of the review panel's recommendations. The new funding plan was unveiled in a 29 April solicitation for proposals to manage the observatory after Cornell's contract ends next year. The solicitation (NSF 10-562) describes how AGS will increase its annual contribution to Arecibo from $2.3 million to $3 million in 2011, rising to $4 million by 2015. Support from the astronomy division will drop from $8 million in 2010 to $6 million in 2011 and, eventually, to $4 million in 2015. NASA, which stopped contributing to the observatory in 2003, will chip in $2 million a year starting this fall.

    Blandford says he's glad that NSF has found a way “to keep a fine facility going.” He says the funding levels spelled out in NSF's solicitation seem to be “roughly following the spirit of our recommendations.”

    Maybe so, says Campbell, but the observatory traveled a bumpy road to get there. “We went through a 20% staff reduction after the review,” says Campbell. “Now the staff is overworked, definitely overextended, and we'll be struggling mightily to maintain our research programs.” The observatory received $3.1 million last year as part of the massive federal stimulus funding package, which helped to ease its pain, and the Puerto Rican government floated a $3 million bond to pay for needed maintenance. But scientists remain skittish. “Some of the astronomers are worried that our jobs could be on the line,” says one researcher who did not wish to be named. “There is still uncertainty.”

    The lifelines thrown by NASA and AGS could raise the profile of atmospheric studies and asteroid-detection efforts at the observatory, presumably at some cost to astronomical observations. A NASA press release says the agency expects Arecibo to allocate “at least 500 hours per year of telescope operations for planetary radar research supported by NASA, and at least 300 hours of that dedicated to near-Earth objects [NEO] research.” Last year's lineup devoted 300 hours to planetary radar, including 170 hours for NEOs.

    Robert Robinson, program officer for Arecibo in the AGS division, hasn't laid down any requirements for observing time. However, “we would expect to have more flexibility in the use of the telescope,” he says, including being given priority during geophysically interesting events like a solar storm.

    Cornell hopes to continue to operate the center. But it will have at least one competitor: SRI International, a nonprofit with a history of managing research and development projects, has said it also plans to bid for the 5-year contract.

  3. Energy Research

    Fusion 'Consolation Prize' Gears Up for Show Time

    1. Dennis Normile

    ROKKASHO, JAPAN—Strolling into a massive chamber surrounded by thick concrete walls, fusion engineer Pascal Garin beams like a proud parent as he describes the unique particle accelerator that will be assembled in this empty facility to test novel materials for future fusion reactors. In another vacant building across the campus, physicist Yoshikazu Okumura of the Japan Atomic Energy Agency (JAEA) boasts how the vast hall will soon host the fastest supercomputer in Japan; a stone's throw away, other labs will soon be off-limits to visitors as researchers start experiments on toxic and radioactive materials.

    These disparate facilities are all part of a plan to bring fusion energy a step closer to reality. They are proceeding in parallel with the single most important experiment for the future of fusion power, the $12 billion International Thermonuclear Experimental Reactor (ITER). Now under construction in Cadarache, France, ITER is meant to show that it is possible to harvest energy from magnetically contained plasma burning at immense temperatures. But ITER will not fully test fuel production for the plasma envisioned for future reactors, and it will not investigate the advanced materials they require. That's where the $900 million research effort here in Rokkasho, called the Broader Approach, comes in. The project, funded by Japan and six European nations, passed a major milestone on 27 April when its buildings opened on schedule. The Broader Approach is going “rather well,” Garin says. That stands in contrast to ITER, which is running late and facing a €1.4 billion shortfall (Science, 14 May, p. 798).

    Off the ground.

    Buildings to house Broader Approach research activities were recently completed in Rokkasho, at the northern tip of Japan's Honshu Island

    PHOTO CREDIT: JAPAN ATOMIC ENERGY AGENCY

    The Broader Approach is a child of ITER. In late 2004, ITER's partners at the time—China, the European Union, Japan, South Korea, Russia, and the United States—were split on whether to build the reactor at Cadarache or Rokkasho. Japan and the European Union both had offered to foot half the cost. During the tense stalemate, negotiators crafted a consolation prize for the runner-up. Fusion proponents had long envisioned a Fusion Demonstration Reactor (DEMO) as a steppingstone from ITER to commercial fusion power plant prototypes. But DEMO would require new radiation-resistant materials and extensive engineering studies not on the ITER agenda. “There is still limited knowledge of the physics and engineering requirements for fusion power plants,” says JAEA engineer Masanori Araki. Starting these efforts in tandem with ITER would speed the transition to DEMO. The Broader Approach was “a very clever” way to break the impasse, says Garin.

    The ITER partners finally tilted to Cadarache. Then in February 2007, Japan and Euratom agreed on a plan for the Broader Approach, under which they would share costs. The project has three ambitious programs. The first is to develop a new type of particle accelerator needed for a future facility to test materials for DEMO. Whereas ITER's vacuum vessel will be made of stainless steel, that won't do for DEMO, which must withstand far greater irradiation damage. Engineers are eyeing reduced-activation ferritic steel, formulated to better resist irradiation damage.

    To bombard this material with the radiation expected in DEMO, engineers want to build the International Fusion Materials Irradiation Facility (IFMIF), whose site has not yet been chosen. This facility will accelerate two beams of deuterons—nuclei of deuterium, a hydrogen isotope—that will merge and smash into flowing liquid lithium, producing a neutron beam aimed at material targets. Whereas current accelerators rev up handfuls of particles to high energies, IFMIF will need a continuous stream of particles at a lower energy. Starting in about 2 years, the Broader Approach will develop and test key components of this future accelerator. Garin, project leader for the accelerator facility and related activities, expects experiments to be completed by 2015. To gain experience controlling the flow of liquid lithium, a test facility will be built at JAEA's labs in Oarai, about 100 kilometers northeast of Tokyo, where the staff has expertise handling highly corrosive materials. Meanwhile, engineers scattered around the world will work on the IFMIF design. As with ITER, the design must go forward even though IFMIF's site will be decided later.

    The second leg of the Broader Approach is the International Fusion Energy Research Centre, a grab bag of projects. One is a supercomputer designed to simulate and model ITER, DEMO, and other fusion reactors. It will also aid research on novel materials. Reactors would generate energy by fusing deuterium and a second hydrogen isotope, tritium. Deuterium is abundant in seawater, but tritium is rare. For DEMO and commercial reactors, tritium would be made in the reactor by capturing neutrons from the plasma in a so-called blanket lining the vacuum vessel. There, reactions will multiply the neutrons and steer them toward the lithium; the collisions will produce tritium to feed back into the plasma. This fuel cycle is critical to making fusion energy efficient, says Garin. Starting later this year, the center will begin to develop blanket module materials and radiation-resistant silicon-carbide composites that might supersede ferritic steel in commercial fusion reactors.

    Finally, the Broader Approach includes the Satellite Tokamak Program, which will fit Japan's aging JT-60 tokamak in Naka with superconducting magnets. The overhauled tokamak will be used to refine operational protocols “so that experiments on ITER go more smoothly,” says Shinichi Ishida, the project leader. They will also be “looking beyond ITER,” he says, at plasma experiments relevant to DEMO's design. The new magnets are now being fabricated; experiments on the souped-up JT-60SA (for super advanced) are expected to start in 2016.

    Fatherly pride.

    With the Broader Approach on track so far, fusion scientists Pascal Garin (left) and Yoshikazu Okumura have plenty to smile about.

    CREDITS: D. NORMILE/SCIENCE

    Although the Broader Approach infrastructure is well on track, one big challenge remains: attracting talent. “There is no community of accelerator specialists here,” says Garin, who estimates that he needs to hire a few dozen scientists for experiments on the accelerator and a group of 15 to 20 to coordinate and manage IFMIF design activities. All told, the Broader Approach must recruit up to 200 people in Rokkasho, and more for other planned JAEA fusion facilities, Okumura says. The problem is that relocating here is a hard sell. Rokkasho is a cluster of six villages with a population of 11,500 on a peninsula at the northern tip of Japan's main island, Honshu. The snowy, isolated region may be famous for seafood, but it's hard to find good wine or cheese, says Garin, a self-described city lover from southern France who admits that his 3 years so far in Rokkasho have been a big adjustment.

    Others enjoy being far from the madding crowd. “In winter, I go skiing every weekend and hiking and fishing in spring and summer,” says Okumura. Having lived in rural France, accelerator physicist Christophe Vermare says that he and his wife and two young children “like being near the ocean and the countryside.” They also like the attention lavished on children at Rokkasho's new international school, where four teachers oversee just seven pupils. “We have extraordinarily strong support from local authorities to ease our life here,” Garin says. “I have less difficulty bringing Europeans to Rokkasho than Japanese,” he says. That may be due to Japan's academic culture, in which young researchers feel pressure to land permanent positions early in their careers. The Broader Approach is supposed to wind up in 10 years.

    The short horizon raises other issues. “Investing so much money for a few months of experiments is nonsense,” Garin says. The time limit could leave a number of key questions unanswered. For instance, the Broader Approach crew won't be able to run the accelerator long enough to verify its reliability for the long-term continuous use needed for IFMIF. The current agreement also does not call for experiments on the interaction between neutron beams and liquid lithium. Garin plans to propose that the accelerator experiments be continued beyond the expiration of the current agreement in 2017. Likewise, Okumura argues that it would make sense to run the supercomputer and other Rokkasho facilities in support of ITER's experimental program, scheduled to start in 2018 and go for 20 years.

    According to Okumura, Japan and its European partners could easily extend the Broader Approach. The big question is whether there would still be money for fusion research after paying for ITER. Like all parents of a newborn, Okumura and Garin realize it takes more than 10 years—and heaps of cash—to raise a child.

  4. ScienceInsider

    From the Science Policy Blog

    The U.S. Senate rejected a resolution last week that would have blocked the Environmental Protection Agency from regulating carbon dioxide emissions based on its finding that they endanger human health. Supporters of the resolution say Congress, which is deeply divided on the issue, should have responsibility for regulating greenhouse gases.

    Director Francis Collins says the National Institutes of Health needs to revise its rules designed to prevent conflicts of interest among researchers after The Chronicle of Higher Education reported that a scientist banned by one university from applying for NIH grants for 2 years is free to seek funding after changing jobs.

    UNESCO has shelved a planned ceremony this month to award a prize sponsored by the controversial president of Equatorial Guinea amid growing concern that accepting the gift may tarnish its reputation.

    The University of California (UC) system is threatening to boycott Nature Publishing Group's journals to protest what the university is calling an “exorbitant and unreasonable” increase in subscription fees and what the publisher says is a necessary adjustment after years of “unfair” discounts.

    As part of our team coverage of the oil spill in the Gulf of Mexico (see http://news.sciencemag.org/oilspill), we reported that:

    • A research cruise hopes to study a modern-day analog to violent outflows of methane gas from the sea floor that might have caused the planet's temperature to spike about 55 million years ago. Methane makes up about 40% by mass of what's spewing out of the BP well.

    • Scientists want to divert less water from the Mississippi River to protect Louisiana's wetlands. Increasing the flow through the delta should help keep oil at bay for longer.

    See the full postings and more at news.sciencemag.org/scienceinsider.

  5. Environment

    Endosulfan's Exit: U.S. EPA Pesticide Review Leads to a Ban

    1. Naomi Lubick*

    • Naomi Lubick is a writer in Zurich, Switzerland.

    Exposed.

    A review by EPA found potential health risks for farm workers coming in contact with endosulfan.

    CREDIT: LUIS M. ALVAREZ/AP PHOTO

    After a lengthy scientific review, the United States last week decided to ban the use of endosulfan, an inexpensive organochlorine pesticide that builds up in the environment. The U.S. Environmental Protection Agency (EPA) ruled that the compound—used on crops ranging from Florida's tomatoes to California's cotton—should be phased out on a schedule to be negotiated with the manufacturer. More than 60 other countries have already opted for a ban; the holdouts—including India and China—argue that the pesticide should continue to be permitted where farmers cannot afford substitutes.

    EPA concluded that endosulfan poses a hazard to both wildlife and humans, citing evidence of fish deaths downstream from treated areas and indications of neurodegenerative impacts in animals, with implications for humans, particularly farm workers. Among recent data cited by EPA is a study published online earlier this year in Ecotoxicology showing that fish at lower trophic levels in the Everglades may retain endosulfan in tissues and pass it on to wading birds that feed on them. (Compounds that collect in tissues and are passed to predators up the food chain are said to “bioaccumulate.”) Previous studies have detected low levels of endosulfan in Arctic animals' tissues, a key indicator of bioaccumulation. Other studies have found traces of endosulfan in human breast milk.

    The current EPA review began after Bayer, endosulfan's patent holder and original maker, received permission to reregister the product in 2002, along with a request from EPA for updated toxicity data. Bayer's test results, submitted in 2006, showed evidence of developmental neurotoxicity in rats and toxicity at low levels in the fetuses of pregnant rats; the agency subsequently opened its review, weighing the withdrawal of certain uses. The next year, Bayer stopped selling endosulfan in the United States and pledged to stop selling it internationally by the end of this year. The only remaining U.S.-registered endosulfan maker today is Makhteshim Agan, an Israeli manufacturer with a branch in North Carolina.

    Starting in 2007, EPA gathered public comment on endosulfan's ecotoxicological impacts and on potential economic impacts on U.S. farmers if the pesticide were banned. Last week, EPA announced that it will seek a voluntary phaseout with Makhteshim Agan.

    The EPA decision seems to have already had repercussions abroad. The Australian Pesticides and Veterinary Medicines Authority, which allows some restricted uses of endosulfan, issued a statement last week saying that it does not know of any human health impacts in Australia. But the agency says it is in contact with EPA and other Australian authorities to see if it, too, should take further action on endosulfan.

    Some advocates of a global ban are pushing for controls through the Stockholm Convention, which restricts the use of long-lived compounds. Endosulfan is similar in some ways to the “Dirty Dozen” persistent organic pollutants (POPs)—including chlordane and dieldrin—that were restricted when the convention went into force in 2004. Last year, the convention's scientific advisory group, the POPs Review Committee (POPRC), recommended that endosulfan be considered a POP under the treaty's definition—but they encountered dissent.

    India's representative at last year's annual meeting of POPRC argued that endosulfan is not toxic to humans or the environment at levels currently detected. India also questioned whether Indian users were the source of “long-range transport.”

    Although research shows unquestionably that endosulfan travels far afield, questions do remain over whether the bioaccumulated levels in animals are high enough to be toxic, says Bert Volger of Ceres International LLC, who represented Makhteshim Agan as an observer at the 2009 POPRC meeting. The company disagrees with EPA's scientific assessment but said in a press release last week that, given the high cost of challenging EPA's decision, it will cooperate with the proposed U.S. phaseout.

    When POPRC holds its next meeting in October to consider the socioeconomic impacts of restricting endosulfan use, India is expected to make the argument that a ban would harm poor farmers. (The Indian government owns the country's main producer of endosulfan, Hindustan Insecticides Ltd.) Environmental groups say China, another major endosulfan manufacturer and user, is likely to support India's position.

    EPA considered economic impacts alongside environmental ones when it evaluated a ban in the United States. It concluded that a switch to safer replacements would not harm farmers' bottom lines. Alternatives like the insect-attacking bacterium Bacillus thuringiensis and a family of synthesized plant compounds known as pyrethroids are less toxic to humans. Many farmers in China have already made the switch to these substitutes, according to a report to the Stockholm Convention from two observing environmental activist groups, Pesticide Action Network International and the International POPs Elimination Network (IPEN). Still, China may not be ready to list endosulfan under the Stockholm Convention, according to Joe DiGangi of IPEN.

    DiGangi, who has been monitoring the POPRC conversations, says IPEN has pushed these alternatives for India as well. He argues that EPA's decision could influence policies abroad in a couple of ways. DiGangi says it will add new studies to the toxicity data available to decision-makers, and it could persuade other countries that “a Stockholm Convention listing is on the horizon.” The impact on endosulfan-producing countries such as India will be limited, however, he says: “The U.S. ban will not affect the Indian industry economically,” for example, but it will make the defense of endosulfan “even more difficult.”

  6. ScienceNOW.org

    From Science's Online Daily News Site

    CREDIT: D. H. JANZEN ET AL., PNAS EARLY EDITION (2010)

    False Peepers Frighten Predators

    Is that a snack or a snake? To small tropical birds foraging on the rainforest floor, those two scowling eyes peering back at them from between the leaves could be a predator. But they could also belong to one of the hundreds of caterpillar species that have evolved eyelike spots and patterns to trick feasting birds. The patterns don't need to be highly detailed to work. Even the mere suggestion of eyes is enough to shoo a bird away, suggesting that the birds are reacting to hard-wired, predator-avoidance instincts, researchers report in the Proceedings of the National Academy of Sciences.

    Did a Deep Sea Once Cover Mars?

    Once upon a time, a deep ocean covered one-third of Mars. Then, billions of years ago, it dried up, leaving the arid, rocky planet we see today. It's a provocative idea, but is it true?

    In a new study, Gaetano Di Achille and Brian Hynek of the University of Colorado, Boulder, considered 52 martian deltas—piles of river-borne sediment—that formed at the level of some body of standing water, as the Mississippi delta is forming at the level of the Gulf of Mexico. One-third of the deltas are at the same elevation ±177 meters (one standard deviation), the pair reports in Nature Geoscience. The simplest explanation, the researchers say, is that all of these deltas formed around the same ocean about 3.5 billion years ago.

    But planetary fluvial geologist Rossman Irwin of the Planetary Science Institute in Tucson, Arizona, is somewhat skeptical, pointing to some deltas and valley networks that lie well below the team's favored sea level, a physical impossibility if an ocean that deep existed at the time they formed. Even so, he thinks it's likely that some sort of northern ocean once existed, even if it's difficult to prove.

    Genetic Map of Autism Comes Into Focus

    A new study of nearly 1000 people with autism has confirmed that the genetics of the disease are much more idiosyncratic than some had thought. Rather than a few genes that raise the risk of autism throughout the population, scientists are finding dozens of genes that spur disease, many of them in just one or two people.

    Researchers scanned the genomes of 996 children with autism-spectrum disorders, a group of conditions that affect social and communication skills, at high resolution and compared them with the genomes of the children's parents and those of 1287 people without the disease. Just over 5% of those affected had at least one copy-number variant—a deletion or duplication of stretches of DNA that can encompass many genes—that was not present in their parents. And copy-number variants inherited from parents were also much more common in the autistic cohort: Overall, rare copy-number variants were 19% more likely to disrupt genes in autistic children than in controls, the team reports in Nature.

    Deletions of genes had a big effect. Nearly all of these were extremely rare, showing up in, at most, a handful of families. “Most individuals that have autism will have their own rare form,” genetically speaking, concludes senior author Stephen Scherer, a geneticist at the Hospital for Sick Children in Toronto, Canada.

    However, the team found that genes deleted in autistic patients tended to perform similar tasks. Many were involved in aspects of cell proliferation, such as organ formation. A number participated in development of the central nervous system and others in maintaining the cytoskeleton, which protects the cell and helps it move.

    CREDIT: LEEH-IB-USP

    Was the New World Settled Twice?

    Were the primary ancestors of today's Native Americans really the first people to set foot in the New World? Genetic evidence suggests so, but ancient skeletons tell a different story.

    In an attempt to settle the debate, a team of paleoanthropologists compared the skulls of several dozen Paleoamericans, which date back to the early days of migration 11,000 years ago, with those of more than 300 Amerindians, which date to 1000 years ago. The researchers found clear differences in the shapes and sizes of the Paleoamerican and Amerindian samples. That suggests that more than one group of individuals migrated to the Americas from Asia, the team reports in PLoS ONE. And given the age of the skeletons, the researchers say, this other group of individuals arrived before the primary ancestors of today's Native Americans.

    The work is “solid” and “perhaps the most sophisticated analysis of craniofacial traits undertaken to date,” says Theodore Schurr, a molecular anthropologist at the University of Pennsylvania. But he warns that the small sample sizes leave the door open to other interpretations.

    Read the full postings, comments, and more at news.sciencemag.org/sciencenow.

  7. Acoustics

    Probing the Secrets of the Finest Fiddles

    1. Adrian Cho

    Violinmakers are taking up the tools of science in a study of Guarneri del Gesù's Vieuxtemps and other great violins.

    Sublime.

    Ilya Kaler plays the Vieuxtemps, crafted by Guarneri del Gesù in 1741.

    CREDIT: EUGENE SCHENKMAN, THE STRAD DIMENSION

    CHICAGO, ILLINOIS—Ilya Kaler, a renowned soloist, gazes admiringly at the 269-year-old violin. He has just played four other great old Italian instruments in an invitation-only recital in the cramped quarters of Bein and Fushi Inc., violin dealers whose shop looks out over Chicago's famous Michigan Avenue. Now Kaler holds the star of the 7 April event, a fiddle named the Vieuxtemps after a previous owner and crafted by Bartolomeo Giuseppe Antonio Guarneri, also known as Guarneri del Gesù, who, along with Antonio Stradivari, is widely considered the best violinmaker ever to have lived.

    “This is one of the greatest instruments in existence, no doubt, one of two, maybe three,” says Kaler, who, with his easy smile and a crimson handkerchief tucked into the breast pocket of his tuxedo, exudes the unassailable confidence of a virtuoso. Then he reconsiders that count. “Three is too many,” he says.

    For sale for $18 million, the Vieuxtemps is a superior instrument, a piece of 18th century art, a potential investment, a legend. It's also the subject of scientific study. With the permission of its owner and dealer Geoffrey Fushi, a team of violinmakers and scientists has put the Vieuxtemps and three of the other violins in the recital through tests including computed tomography (CT) scans and acoustic measurements. They hope to pinpoint traits that distinguish a great violin from a good one or a Stradivarius from a Guarneri.

    Video

    Ilya Kaler plays an excerpt of Wagner's Albumblatt. Video courtesy of Eugene Schenkman, whose documentary, The Strad Dimension, is in development.

    The project also highlights an emerging trend within violinmaking. For decades, the instrument has fascinated a small community of scientists and engineers. Now, even as the scientific experts age and retire, some leading violinmakers are adopting their conceptual tools and experimental methods and taking a decidedly more scientific approach to their centuries-old craft.

    “Violinmaking involves a lot of sitting around and scraping pieces of wood, and if you want to have a life of the mind, you want to be thinking about these [technical] things,” says Joseph Curtin, a violinmaker from Ann Arbor, Michigan, who proposed the study to Fushi. But so far, science cannot tell the best fiddles from the merely good ones, cautions James Woodhouse, an acoustics engineer at the University of Cambridge in the United Kingdom who has studied the violin for 35 years. “We know pretty well how to distinguish a really bad instrument from a really good one. What distinguishes a pretty good instrument from a stratospheric instrument, I think we still don't know.”

    As Kaler plays, however, it's hard to believe that distinction will forever defy scientific analysis. The brilliance of another Guarneri del Gesù, the Sennhauser, slices through as he moves from the introspective andante to the skipping allegro of a sonata by Handel. The darker sound of a violin made by Giovanni Battista Guadagnini in 1752 conjures an image of molasses as the soloist wrings emotion from Schumann's melancholy Romance in A, and the Cathedral Stradivarius sings sweetly as he bows Gluck's soaring Mélodie.

    Finally, the Vieuxtemps rings out almost painfully bright in three pieces including the playful Rondino by 19th century Belgian violinist Henri François Joseph Vieuxtemps, who once owned the fiddle. The violins speak with voices as distinct as people's, and it seems impossible that science cannot quantify the differences.

    Scientists seduced

    The violin has captivated scientists for more than a century. In 1862, German physicist Hermann von Helmholtz cobbled a stroboscope from a tuning fork to decipher the motion of a bowed string. A single kink zips back and forth between the “bridge,” the stanchion on the violin's belly over which the strings run, and the spot where the player pins the string to the fingerboard. The vibrations pass through the bridge and into the violin's body, whose arched top is carved from a slab of spruce and whose back and sides are made of maple. Since the 1930s, researchers have learned much about how the violin converts those vibrations to sound.

    Video

    Ilya Kaler plays an excerpt of Tchaikovsky's Violin Concerto on six famous violins. Video courtesy of Eugene Schenkman, whose documentary, The Strad Dimension, is in development.

    To move a lot of air and sing loudly, a violin must vibrate readily, or “resonate,” like a tuning fork. But whereas a tuning fork has a single soundmaking resonance at a sharply defined frequency, a violin has hundreds of resonances that overlap, enabling it to respond at pitches ranging from 196 hertz (open G) to above 20,000 hertz. Yet it should not ring equally at all pitches, as it would then sound lifeless. “It's not a loudspeaker with a flat-spectrum response,” says George Stoppani, a violinmaker in Manchester, U.K. Instead, the spectrum with which a violin radiates sound resembles a mountain range, with each peak corresponding to a pattern of motion, or “mode,” of the body (see figure, below).

    One way to study the spectrum and the modes is by thwacking the bridge with a small hammer and measuring the violin's output and motion. Around 280 hertz lies the A0 mode, in which air flows in and out of the “f holes” on the violin's belly. The B1− and B1+ modes, identified in the 1980s, lie around 480 hertz and 550 hertz. Around 2500 hertz lies a thicket of modes called the “bridge hill,” long thought to be due to motion of the bridge. Each mode can be complex, as the violin's symmetrical exterior hides its asymmetrical interior. Under one foot of the bridge, a beam called the bass bar reinforces the top; beneath the other foot, a pillar called the sound post connects top and back.
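
    Extracting such a spectrum from a hammer tap is straightforward signal processing. Below is a minimal, self-contained Python sketch of the idea: divide the spectrum of the body's measured response by the spectrum of the hammer force to estimate the frequency response, then pick out prominent peaks as candidate modes. Everything here is a synthetic stand-in; the mode frequencies come from the text, but the damping rates, noise level, and peak-picking threshold are illustrative assumptions, not measured values.

        # A toy impact-hammer analysis on synthetic signals (no real data).
        import numpy as np
        from scipy.signal import find_peaks

        fs = 44100                                 # sample rate, Hz
        t = np.arange(0, 1.0, 1 / fs)

        # Stand-in hammer tap: a brief force pulse at t = 10 ms.
        force = np.exp(-0.5 * ((t - 0.010) / 0.0003) ** 2)

        # Stand-in body response: decaying resonances near the A0, B1-, and
        # B1+ frequencies mentioned above, plus a little measurement noise.
        modes = [(280, 20), (480, 30), (550, 35)]  # (frequency Hz, damping 1/s)
        resp = sum(np.exp(-d * t) * np.sin(2 * np.pi * f * t) for f, d in modes)
        resp = resp + 1e-4 * np.random.default_rng(0).standard_normal(t.size)

        # Frequency response estimate: output spectrum over input spectrum.
        H = np.abs(np.fft.rfft(resp) / np.fft.rfft(force))
        freqs = np.fft.rfftfreq(t.size, 1 / fs)

        # Candidate modes: prominent peaks below 1 kHz.
        low = freqs < 1000
        peaks, _ = find_peaks(20 * np.log10(H[low]), prominence=10)
        print("candidate mode frequencies (Hz):", np.round(freqs[low][peaks]))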

    Much work has focused on the radiation spectrum, which varies among violins a bit like the way fingerprints vary among people. In 1991, German physicist Heinrich Dünnwald reported on the spectra of 700 violins including 53 old Italians, dividing the spectra into six frequency bands that he claimed are associated with qualities of sound. For example, Dünnwald said, a violin that radiates more strongly between 650 hertz and 1300 hertz than in the neighboring bands sounds “nasal,” whereas one that pumps out more energy from 4200 hertz to 6400 hertz than at lower frequencies sounds “harsh.”
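
    Dünnwald-style band comparisons amount to summing radiated power over frequency bands and comparing the totals. The short Python sketch below illustrates the idea on a toy spectrum; the 650-to-1300-hertz and 4200-to-6400-hertz edges come from the text, but the choice of comparison bands and the toy spectrum itself are simplified assumptions, not Dünnwald's actual parameters.

        # Compare a violin's radiated energy across frequency bands (toy data).
        import numpy as np

        def band_power(freqs, spec, lo, hi):
            # Summed radiated power between lo and hi hertz.
            sel = (freqs >= lo) & (freqs < hi)
            return np.sum(spec[sel]) * (freqs[1] - freqs[0])

        # Placeholder radiation spectrum; a real analysis would use a measured one.
        freqs = np.linspace(190, 8000, 4000)
        spec = 1.0 / (1.0 + ((freqs - 450.0) / 2000.0) ** 2)

        nasal = band_power(freqs, spec, 650, 1300)
        flanks = band_power(freqs, spec, 190, 650) + band_power(freqs, spec, 1300, 4200)
        harsh = band_power(freqs, spec, 4200, 6400)
        lower = band_power(freqs, spec, 190, 4200)

        print(f"nasal band vs. neighbors:   {nasal / flanks:.2f}")
        print(f"harsh band vs. lower bands: {harsh / lower:.2f}")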

    Sonic fingerprint.

    Each peak in a violin's spectrum corresponds to a mode of vibration. Red and blue denote motion in opposite directions.

    CREDITS: GEORGE STOPPANI

    Dünnwald defined a quality index according to which 92.5% of the old Italians ranked as superior, compared with 19% of quality instruments made after 1800. But some researchers question the simple association of frequency bands with sound qualities and the conceptual value of such correlations. “Of course, the old Italians were on top of the list, and that's not that interesting to me because it doesn't tell you how a violin works,” says Gabriel Weinreich, a physicist retired from the University of Michigan, Ann Arbor, who has studied the violin since 1978.

    Scientists continue to probe the violin in ever greater detail, teasing out the interplay of modes and showing, for example, that the bridge hill does not stem from a simple rocking of the bridge. Until recently, however, their work attracted little interest from violinmakers. That's changing, says Woodhouse. “There's a lot of research going on, but it's driven by makers,” he says. “And they're top makers, not marginal makers who everybody laughs at.”

    A revolution in the making

    The meeting of minds hasn't come easy, says Samuel Zygmuntowicz, a violinmaker from New York City who made a violin that sold at auction for $130,000, a record for a living maker. Scientists responded tepidly a few years ago when he presented work at an acoustics meeting at Cambridge, he says. “They were trying to explain to me how to do better measurements, and I was trying to explain the phenomena I thought I was discovering,” he says. “We couldn't get past it.” But Zygmuntowicz boned up on physics and says a meeting in Cambridge last September went much better.

    Working with scientists has given violinmakers access to tools far more sophisticated than the chisels and calipers they traditionally use. As early as the 1980s, researchers put violins through CT x-ray scanners to trace an instrument's internal structure and look for defects and repairs. High-resolution scans may also provide insight into the long-debated issue of whether the wood in the old instruments differs from that in newer ones, perhaps because it was treated in some way.

    Reporter's Notebook

    Terry Borman, a violinmaker from Fayetteville, Arkansas, and Berend Stoel, a computer scientist working on image processing at Leiden University Medical Center in the Netherlands, took high-resolution scans of three Guarneri del Gesùs, two Stradivariuses, and eight modern violins. They found no difference between the average densities of the wood in the tops of old and new violins. Within the old tops, however, the density varied less between the light grains produced as trees grow during the warmer months and the dark grains produced during colder months, they reported 2 July 2008 in PLoS ONE. “So the old violins were a lot more homogeneous,” Borman says.

    Conversely, teaming with leading makers gives scientists access to rare old instruments. George Bissinger, a physicist at East Carolina University in Greenville, North Carolina, has studied the violin since 1969. But it wasn't until he teamed with Zygmuntowicz in 2006 that he was able to put a Guarneri del Gesù and two Stradivariuses through testing in a project called Strad3D. He and eight violinmakers, a radiologist, and an engineer used a three-dimensional laser vibrometer to map the fiddles' modes, an anechoic chamber to analyze their spectra, and CT scans to probe their guts.

    The study has provided no simple way to separate the good violins from the bad ones, however. For example, the spectra of fine old violins differ from those of poorer new ones only in a few subtle ways, Bissinger says. The old Italians have slightly more uniform spectra and slightly greater damping than 14 lesser violins, Bissinger reported in the Journal of the Acoustical Society of America in September 2008. They also have stronger A0 modes. “That's the one robust difference between a good and a bad violin,” he says.

    The current study suggests that the Vieuxtemps sings in a way that other fiddles do not. In fact, that observation was the impetus for the project. Curtin had heard that Fushi had the fiddle and talked his way into making some measurements. As other analysts do, he uses a tiny automated impact hammer to strike the bridge from the side. But Curtin also tapped it from the top. “I found an acoustical feature that showed up spectacularly well in the Vieuxtemps, in fact more so than with any other violin I've seen,” he says.

    Curtin had pinged other instruments that way and had seen signs of a second bridge hill above 3500 hertz. But when its bridge is struck vertically, the Vieuxtemps resonates sharply at 4125 hertz, Curtin explains a few days after the recital. In his quiet studio off a dirt road outside Ann Arbor, he shows several plots—made with software written by Stoppani. The Vieuxtemps's spectrum shows a peak about 5 decibels high that is absent in the spectrum of the Cathedral Strad. The Jarnowich Guarneri del Gesù shows a smaller feature, so the resonance may be common to Guarneri del Gesùs, says Curtin, who wants to trace its origin.

    Spurred by that observation, Curtin talked Fushi into letting him conduct more tests and assembled a team including Borman, Stoel, Stoppani, Weinreich, and Woodhouse. The researchers are still analyzing their data, but one fact is already clear. Most old instruments have been repaired multiple times and patched near the bridge and sound post. A CT scan shows that the Vieuxtemps is pristine (see figures, below), Borman says. So compared with other old Italians, it may sound more like its maker intended.

    What difference?

    But can subtle density variations or spectral features explain the supposedly superior qualities of Strads and Guarneris or distinguish between them? Cambridge's Woodhouse has doubts. Acoustically speaking, researchers can now say why a violin sounds like a violin and not a guitar, he says, but they struggle to make finer distinctions. “If you chose any particular feature, you can probably find two million-dollar violins that differ from each other in that one feature as much as the million-dollar ones differ from a good $10,000 modern one,” Woodhouse says.

    In fact, Woodhouse and Claudia Fritz, a psychoacoustician at Pierre and Marie Curie University in Paris, have studied how big the changes in a violin's spectrum must be before listeners can tell the difference. To do that, Fritz digitized the spectrum of a violin to create a “virtual violin.” When a violinist plays, say, an A, the string vibrates at 440 hertz and multiples of that frequency, or “harmonics.” The violin's spectrum then determines how strongly it radiates at each of those frequencies. With the virtual violin, Fritz can change that spectrum, raising and lowering peaks or shifting their frequencies.
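
    The mechanics of such a virtual violin can be sketched in a few lines. In the Python toy below, a bowed note is modeled as harmonics of the fundamental, each weighted by a body spectrum built from resonance peaks, so raising or shifting a peak changes the synthesized sound. The Lorentzian peak shapes, their placements, and the gain values are illustrative assumptions, not Fritz's actual filter.

        # Toy "virtual violin": weight a note's harmonics by a body spectrum.
        import numpy as np

        fs = 44100
        t = np.arange(0, 1.0, 1 / fs)
        f0 = 440.0                               # an open-A note

        def body_gain(f, peaks):
            # Toy radiation spectrum from resonances: (center Hz, width Hz, gain).
            return sum(g / (1.0 + ((f - fc) / w) ** 2) for fc, w, g in peaks)

        # Illustrative resonances, loosely placed at A0, B1-, B1+, and the bridge hill.
        peaks = [(280, 30, 1.0), (480, 40, 2.0), (550, 40, 2.5), (2500, 600, 1.5)]

        def synth(peaks):
            # A bowed string radiates at the fundamental and its harmonics;
            # the body spectrum sets how strongly each harmonic is heard.
            note = np.zeros_like(t)
            n = 1
            while n * f0 < fs / 2:
                note += body_gain(n * f0, peaks) / n * np.sin(2 * np.pi * n * f0 * t)
                n += 1
            return note / np.max(np.abs(note))

        original = synth(peaks)

        # Raise the B1+ peak by about 4 decibels (amplitude factor 10**(4/20)),
        # near the threshold the musicians in the study could just detect.
        louder = list(peaks)
        louder[2] = (550, 40, 2.5 * 10 ** (4 / 20))
        modified = synth(louder)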

    Not surprisingly, musicians have proved much more sensitive than nonmusicians to small changes in the spectrum, as Fritz and colleagues reported in December 2007 in the Journal of the Acoustical Society of America. When comparing single notes, musicians could tell if the strength of a resonance like the A0 or B1+ changed by about 4 decibels—slightly more than a doubling in intensity. They could tell if the frequency of the resonance was shifted by about 5%, roughly the difference between, say, C and C-sharp. Curtin says the Vieuxtemps's B1+ resonance is about 10% higher in frequency than those of the Strads. Listeners could detect smaller changes in a Dünnwald band.

    Like new.

    Typical patches and repairs show up as light spots in the CT scan of a 1730s violin (right). The Vieuxtemps (center) is spotless.

    CREDITS (LEFT TO RIGHT): EUGENE SCHENKMAN, THE STRAD DIMENSION; BEREND STOEL AND TERRY BORMAN

    The researchers also tested the correlations between subjects' perceptions and the relative strengths of the Dünnwald bands, which are used as guides by makers, and came up with some surprises. For example, some people thought notes with stronger low-frequency components sounded nasal, whereas others thought just the opposite. “When violinmakers are talking about the nasal band, they should be careful,” Fritz says, “because there is no nasal band at all.”

    Such results further complicate an already confusing situation. Yet, collaborations between scientists and violinmakers seem likely to continue, as they offer makers greater insight into their creations, which for centuries they have crafted by copying excellent old instruments. “I think the best way to make a great violin is to understand what makes an instrument great and what's peripheral,” Curtin says. “When we copy, we tend to copy everything.”

    Moreover, the fact that so far tests have identified no obvious difference between great and good violins may actually be telling researchers something. Most studies have been made on violins in pristine isolation, typically suspended by rubber bands from a mount. Of course, when played, a fiddle lies clamped beneath a violinist's jaw, its neck cradled in the musician's hand, its strings worked by the bow. The instrument's defining qualities may show through only in that interaction.

    “I can tell you that the violinist is the big deal,” Bissinger says. “A great violinist can make even a bad violin sound good.” Zygmuntowicz agrees but warns that researchers may struggle to get reliable data on the working violin. “The situations that a violin operates in are really contaminated circumstances for testing,” he says. “Science has shied away from that interaction because it doesn't make good papers yet.”

    To hear Kaler tell it, the violin-violinist interaction is subtle. Asked what distinguishes the Vieuxtemps, he cites its resonance and ease of response. Then he adds, “If a violin responds too easily, it limits the possibility of a performer to produce many colors or to put his or her own imprint on the instrument because the instrument anticipates your desires too much.” So a violin must resist just enough to make the violinist work for what he wants, he says.

    If the Vieuxtemps resists Kaler, it doesn't show. He closes the recital on the Vieuxtemps with Alexander Glazunov's dreamy Meditation, its luminous last note, a seemingly impossibly high D, hanging in the air like twilight. To the scientifically minded, it whispers, “Explain my beauty!” But doing so may be as difficult as grasping the fading reverberations of the music itself.

    Supporting Online Material

    Videos: S1 and S2 www.sciencemag.org/cgi/content/full/328/5985/1468/DC1

  8. Profile: Linda Bartoshuk

    A Taste for Controversy

    1. John Bohannon

    After discovering “supertasters,” Linda Bartoshuk is pushing to change how psychologists evaluate subjective experiences such as taste and pain.

    GAINESVILLE, FLORIDA—After just a glance at her graduate student's notes, Linda Bartoshuk knows that the results of today's experiment will have to be thrown out. The concentration of quinine—a bitter chemical used in this study of taste perception—is one-tenth of what it should be. The student, Adilia Blandon, suddenly realizes her mistake. Blandon had given the quinine to a team of undergraduate assistants to gauge volunteers' sensitivity to different flavors, but with the wrong standard for bitterness, she can't compare these data with previous results. Blandon turns to Bartoshuk with a cringe and groan. Behind her, the doomed experiment continues.

    Taste explorer.

    After unlocking the mystery of taste sensitivity, Linda Bartoshuk is now hunting for the “perfect tomato.”

    CREDIT: J. BOHANNON

    Moments like these test a busy scientist's patience. But without missing a beat, Bartoshuk nods and says, “Don't worry. This is why we call it a pilot study. Now is the time to catch mistakes.” Blandon perks up like a sail catching a fresh breeze and heads back into the lab.

    Bartoshuk, a professor here at the University of Florida (UF), Gainesville, wasn't just being nice. “I tell my students that if you're not making mistakes in science, you're not taking enough risk.” It was an oversight similar to Blandon's that led to Bartoshuk's most famous discovery: supertasters, people with extreme taste sensitivity. But Bartoshuk's research has illuminated more than the human mouth, says Anthony Jack, a psychologist at Case Western Reserve University in Cleveland, Ohio: “She has helped lead the movement to study subjective experience, considered off-limits for a long time.”

    That leadership has paid off in many high-profile publications, election to the National Academy of Sciences, and last year, the presidency of the Association for Psychological Science (APS). Her career hasn't come without controversy, however. The concept of supertasters still ignites debate. And Bartoshuk is making waves again. Her latest passion is nothing short of overturning one of the central methods of her entire field, the subjective scales on which generations of psychologists have built their careers.

    How not to keep a girl out of science

    For a girl born in mostly rural South Dakota in 1938, science was not high on the list of career options. But after reading every science-fiction book she could get her hands on, the young Bartoshuk dreamed of astronomy. Her high school had other plans for her. “They forced me to take secretary classes,” she recalls with a wry smile. They did accede to Bartoshuk's request to take trigonometry, physics, and chemistry. “I was the only girl in the class, and I was as surprised as anyone when I got the highest grades.” It helped her win a scholarship to attend Carleton College in Northfield, Minnesota—her family couldn't afford the tuition otherwise—and it was science ever after.

    Bartoshuk says she abandoned astronomy when she learned that “women weren't allowed to use the big telescopes.” She switched to the field that would become the scientific love of her life: psychophysics, the study of how physical stimuli from the environment—sugar on your tongue, vibrations in your ear, heat on your skin—lead to the mysterious phenomenon called subjective experience. It may be a branch of psychology, says Bartoshuk, but “psychophysics has a lot in common with astronomy.” Like the stars in a distant galaxy, the minds of other people are ultimately “untouchable,” she says. The only way to bridge the gap is with rigorous experimental observation and mathematical analysis.

    Already as an undergraduate, Bartoshuk decided to study taste. “The tongue was unexplored territory in sensory research,” she says. As a first-year graduate student at Brown University, she wanted to work with Carl Pfaffmann, one of the leading taste researchers and the first to identify the nerves that send taste signals from the mouth to the brain. She vividly recalls her first conversation with the man who would become her Ph.D. adviser. “Pfaffmann told me point-blank that he didn't want women in his lab,” says Bartoshuk. And why? “They're always crying and washing their hair.”

    Bartoshuk dresses plainly, but she does wear big, bright emotions. When she laughs, which is often, she shakes with it. And when she recalls the troubles with Pfaffmann, the sting is suddenly visible in her face, 5 decades later. Lewis Lipsitt, a psychologist at Brown University, says that Pfaffmann was not an easy man. “[He] could be blunt and he was by nature not an effusive, congratulating kind of person.”

    Bartoshuk's emotions and hair care didn't prevent her from winning over Pfaffmann. One day, Bartoshuk says, she finally became “one of the boys.” An experiment was going badly, with nerve fibers drying out. “I had an idea for a solution, but Pfaffmann was completely dismissive,” she says. So she stormed out and returned with a contraption she'd made out of wire to keep the fibers suspended in mineral oil. It worked. “He told me, ‘I guess you're pretty good at this.’”

    Five years later, when Bartoshuk was settled as a scientist at Yale University, the phone rang. It was Pfaffmann. “I was still mad at him,” she recalls, but he had an astonishing proposal. Calling her from a hospital bed, he wanted Bartoshuk to study him; a viral infection had damaged Pfaffmann's nervous system, knocking out taste sensation from one side of his tongue.

    Rosetta stone.

    Supertasters have far more fungiform papillae, bumps on the tongue that house the taste buds.

    CREDITS: COURTESY OF LINDA BARTOSHUK

    Intrigued by this rare opportunity, for months Bartoshuk conducted experiments on her former mentor. By “painting” taste solutions across his tongue in different directions—either from the “dead” side to the “live” side or vice versa—she was able to test conflicting theories for taste perception. “We proved that the taste-transmitting nerves do not poach across the midline of the tongue,” says Bartoshuk. “We confirmed that taste follows touch paths on the tongue, which wasn't known.” And by tracking the intensity of tastes as Pfaffmann's nerves healed, they discovered “unexpected” aspects of how the nerve signals add up to subjective taste perception. “He turned into one of the best data sets at the time,” says Bartoshuk.

    Then, Pfaffmann suffered a stroke and slowly died. Bartoshuk published a short abstract version of the results and put the data away. “I was too sad to work on it,” she says. But all these years later, some of the work is still new to science. Bartoshuk intends to publish it, with Pfaffmann as lead author, “if people agree that it's ethical.”

    Supertasters

    In 1990, Bartoshuk noticed something strange in her latest study of people's sensitivity to bitterness. Like researchers before her, she observed that people differ in sensitivity to identical solutions of a bitter chemical called PTC (phenylthiocarbamide). The underlying genetics were well understood: tasting PTC requires the expression in taste buds of a protein receptor for the chemical, and you either had it or you didn't. Over the years, she had been using the same test subjects for other taste experiments, and she suddenly realized that “some of the same people who were the most sensitive to bitterness were also the most sensitive to sweetness and sourness.”

    For the most part, sensitivities to what were known as the “basic tastes”—bitter, sweet, sour, salty—were thought to be independent of one another. But what if they weren't? Bartoshuk's subjects were judging the intensity of bitter solutions in relation to a control solution of saline. “I realized that if some people are more sensitive to every taste, salt included, then that is no control at all.”

    To try to get around the problem, Bartoshuk asked her subjects to put the intensity of tastes they experienced on a scale based on a totally different sense. “I used sound,” she says. At the bottom of the scale was silence; at the top was “the loudest sound you have ever heard.” Because people's sensitivity to taste and sound should be independent, this could be a way to identify people with highly tuned tongues.

    A striking pattern emerged from the data. About 25% of the people she studied were highly sensitive—up to three times the average—to every taste. “These are people who live in a different taste world,” says Bartoshuk. “If our tastes are painted in pastel colors, theirs are painted in neon.” She dubbed them “supertasters.”

    The term soon became a household word, and Bartoshuk was inundated with requests for interviews. She didn't mind the attention, but she quickly came to regret the term, especially as confusion about the phenomenon spread. “It's not actually true that their taste is super,” she acknowledges. “It's just different.” She says that, for example, vegetables from the Brassicaceae family of plants—cabbage, broccoli, kale—taste bitter to supertasters, so they tend to avoid them. On the other hand, supertasters also tend to eat fatty and salty foods sparingly, “so they are less likely to be obese.”

    And what makes someone a supertaster? “It turns out to be simple,” says Bartoshuk, who regrets that she is not one of them. “Supertasters have far more taste buds than the rest of us.” Bartoshuk has administered the test for taste-bud density—counting those bumps in a fixed area of a blue-dyed tongue—to thousands of people. Realizing that taste-bud density determined the intensity range of taste “was like discovering a Rosetta stone for the senses,” she says.

    Not everyone agrees with Bartoshuk that people are born with fixed food preferences, and it seems that most who disagree do so sharply. “People learn to like or dislike bitter foods,” says Tom Baranowski, a psychologist at Baylor College of Medicine in Houston, Texas. “There's no relationship between those preferences and whether or not you're a supertaster.” Baranowski says he had hoped supertaster status would be “a lever” for improving public health. But now he calls it “a waste of time.” Several other researchers have also failed to reproduce correlations between supertaster status and behavioral or health trends.

    Partly because of the media blitz, a scientific “feud” over supertasters may have been inevitable, says Beverly Tepper, a psychologist at Rutgers University in New Brunswick, New Jersey. “Some of [Bartoshuk's] colleagues have felt that she oversold supertasters.” Tepper says that further research has supported the supertaster effects—that Bartoshuk was right after all, she believes—but that “a lot of people have gotten soured to this field because there are a lot of confusing results.” There is even disagreement over how to diagnose someone as a supertaster. The most widespread diagnostic method continues to be high sensitivity to bitterness, frustrating Bartoshuk. “Initially, lots of people accepted that definition,” she says, although taste-bud density turned out to be a more reliable marker. “I think we were sloppy about it.”

    Regardless of the disputes, says Tepper, Bartoshuk “really launched this whole area, and it has helped psychophysics to see individual differences” between people.

    You taste tomato, I taste …

    Bartoshuk pops a bright red wedge into her mouth. “Oh! These are delicious,” she says, munching on one of the different varieties of tomatoes that have just been sliced and distributed into plastic sample cups. In the coming weeks, hundreds of people will eat similar tomato wedges, scoring various aspects of the taste experience. “The goal is to find the perfect tomato,” she says.

    Same pain?

    On the traditional scale (left), the pain reported by a man and woman may end up equal. Bartoshuk's method uses a personalized scale built of each person's experiences from various senses. In this case, the calibrated scale indicates the woman's injury is more painful than the man's.

    CREDITS: (IMAGES) PHOTOS.COM

    Improving people's diet is the aim of this interdisciplinary study. The tastes of tomatoes and other fruits “have degraded because of the pressures of the market,” says Harry Klee, a plant biologist at UF who collaborates on the project. As supermarkets have demanded fruit that can withstand shipping and ripen rapidly, taste has been unintentionally bred out of tomatoes. Klee believes that the genes for tastiness, most of which is determined by the dozens of aroma molecules that tomatoes produce, can be put back into supermarket varieties. But the challenge is to nail down what exactly people like about “good” tomatoes.

    Bartoshuk is drawing on a lesson from her supertaster research by reconceiving how to use sensory scales. She has decided that to make sure the subjective taste data from different people can be compared, each subject must build a personalized scale. It is a complex process, beginning with a strange task: “Please identify the strongest sensation of any kind that you have ever experienced.” For most people, says Bartoshuk, the strongest is some kind of pain. Among women, for example, it is usually childbirth. That defines the top of a sensory ladder, and the intensity of various non-taste sensations—loud sounds, bright lights—define the intermediate rungs.

    Bartoshuk says that this method, called the general labeled magnitude scale (above), helps her avoid a serious mistake. “If I want to compare the taste experiences of different people, how do I know they're using the scale the same way?” she says. On a 10-point scale of sweetness, “if you say this tomato is 6, and I also say it's a 6, how do we know it's the same sensation? Your 6 might actually be equivalent to a 3 for me.”

    She first noticed the scaling problem in taste research, but Bartoshuk says that it goes far deeper: “Anytime you want to compare subjective experience across different subjects, you run into the scaling problem.” She says the error casts doubt on decades of psychophysics research, as well as studies in other fields that have misused subjective scales, such as neuroscience studies in which subjects report experiences while their brains are imaged. Nor does it end with academia. The same scaling methods are still used to compare subjective experience between potential customers—billions of dollars are spent on market research—and by physicians, particularly for assessing pain.

    It's not just that the traditional subjective scales produce noisy data, says Bartoshuk: “They can produce misleading results.” The worst of these are “reversal artifacts.” Bartoshuk says she encountered one of those with supertasters. “If you use the old 10-point scale, you can make it seem as if supertasters are less sensitive to salt than normal people, which of course is the opposite of reality.”

    One convert is Ann Berger, a clinical pain researcher at the National Institutes of Health Clinical Center in Bethesda, Maryland, who worked with Bartoshuk in the 1990s on oral pain. “I use [Bartoshuk's] scale and it works,” says Berger. “It's most important for assessing chronic pain.” Berger says that data collected from traditional 10-point scales “are meaningless,” and as a result “patients are incorrectly medicated.” The tradeoff with the new scaling method is that “it does take longer to do,” she says, “but it's crucial.” Berger would like to see Bartoshuk's scale adopted as the standard method for pain assessment. The problem, she says, is that “these bad scales were made mandatory by the Joint Commission,” the organization responsible for accrediting health care organizations in the United States. “Now we're stuck with them.”

    In recent years, Bartoshuk has pushed to get the word out on the problem of subjective scaling; it was the sole focus of her plenary lecture at last year's annual APS conference. She has discovered that the issue has a history: R. Duncan Luce, a psychologist at the University of California, Irvine, described problems with subjective scaling as far back as a 1983 paper in the economics journal Theory and Decision. “The [traditional] 10-point scale is really easy to use, but it's also useless,” says Luce. “A lot of people dismiss this problem as the concern of a few theoreticians. But it is serious.”

    One veteran researcher who dismisses it is Adam Drewnowski, director of the Nutritional Sciences Program at the University of Washington, Seattle. “There is no reversal artifact,” the epidemiologist says. “All these scales work in a similar way and get you approximately similar results.” Nonetheless, Drewnowski says that new computer-aided techniques allow him to avoid numbered scales altogether. “We now use visual analog scales” on which subjects “point and click” relative positions. “It's much faster,” he says, but “you can use any scale.”

    Some agree with Bartoshuk but have varying degrees of optimism that her alternative scaling method will catch on. “There is huge inertia involved in changing a system that seems to work,” says John Prescott, a psychologist at the University of Newcastle in Ourimbah, Australia. “In fact, this success is illusory, based on the fact that scale results are consistent with previous scale results, without any consideration of whether the scale measures what it is supposed to measure.”

    Of course, it could be that Bartoshuk's concern about the subjective scaling error is itself an error. After thinking it over, she lays down a verdict. “It would be wonderful,” she says. “Making conceptual mistakes can be an incredible window into new insights.”

  9. Cancer Research

    Childhood's Cures Haunted by Adulthood's 'Late Effects'

    1. Jenny Marder
    1. Jenny Marder is a writer in Washington, D.C.

    As the number of cancer survivors climbs into the many millions, researchers are trying to learn why some acquire treatment-related diseases and others do not.

    Survivor.

    Debbie Motts, cured of Hodgkin's lymphoma at 18, learned at 47 that the treatment likely caused breast cancer.

    CREDIT: COURTESY OF DEBBIE MOTTS

    Every weekday for nearly 3 months, 13-year-old Debbie Motts pressed a Tic Tac mint to her nose as she made her way through the lobby, down the elevator, and along the halls to the radiation clinic for her Hodgkin's lymphoma treatment. She hated the smell of the hospital; the Tic Tac helped. Now, nearly 40 years later, she hates the smell of Tic Tacs. They conjure up memories of her treatment: the sick days, the hair loss, the constant metallic taste in her throat.

    At 18, Motts was declared cancer-free. She regained a normal life, got married, and had two children. But in 2007, a routine mammogram led to the detection of cancer cells in her right breast. Doctors said it was most likely a late-emerging side effect of the radiation she received as a child. The cure, they believed, had caused the cancer.

    There are 11.4 million cancer survivors living in the United States, according to the National Cancer Institute, a cohort that has tripled over the past 3 decades. Some, like Motts, were treated as children or teenagers; almost 80% of children treated for cancer today live at least 5 years after diagnosis.

    But roughly 40% of these survivors will develop life-threatening health problems within 30 years of their initial cancer diagnosis, according to a 2006 study published in The New England Journal of Medicine. The list of cancer therapy's late effects is long and troubling. It includes not just second cancers but strokes, bone damage, and obesity. Lungs can scar and stiffen, making it hard to breathe. Heart muscles can weaken and become flabby, unable to pump blood. Not everyone develops problems, however; 25% remain healthy.

    As the number of survivors has grown, so has the field of late-effects research. Several U.S. hospitals have opened centers devoted exclusively to cancer survivors. St. Jude Children's Research Hospital has amassed a rich database on adult survivors of pediatric cancer, the Childhood Cancer Survivor Study, launched in 1993.

    Vulnerabilities.

    Researchers have linked a growing number of genes to a higher risk of harm from cancer therapy.

    CREDIT: (TABLE SOURCE) SMITA BHATIA AND MARY RELLING/ST. JUDE CHILDREN'S RESEARCH HOSPITAL

    But many questions remain. Few childhood cancer survivors have been followed for more than 30 years. Little research has been done on cancer survivors diagnosed as adults. Also unknown are the molecular mechanisms that cause many late effects.

    A small number of researchers have turned to genetics to help untangle the problem. Recent work on chemotherapy, for example, has identified a gene variant linked to heart problems that appear long after anthracycline drug treatment. Other scientists interested in late effects recently formed an international group to pool data on gene variants that increase the risks for patients who carry them. Their goal is to develop a stronger grasp on the biology behind these late effects, identify the cancer patients who are predisposed, and tailor their treatments accordingly.

    Subtle toxicity

    Smita Bhatia keeps a tidy office at California's City of Hope cancer center in the foothills of the San Gabriel Mountains, with a “Cancer-Free Kids” sticker pasted to a filing cabinet and pictures of her own grown daughters on the shelves. A pediatric oncologist and chief of population genetics, she has sifted through hundreds of genes for those that predict certain late effects.

    In 2004, Bhatia decided to concentrate on the carbonyl reductase (CBR) genes, which encode drug-metabolizing enzymes that break down anthracyclines, considered the backbone of childhood cancer treatment and used to treat about one-half of all cancer patients. Anthracyclines are usually administered intravenously and disrupt the DNA of rapidly dividing cancer cells. Nearly one-third of patients who receive a moderate dose develop congestive heart failure later on, a condition in which the heart no longer pumps enough blood. Most who develop the disease die from it.

    Doctors have known about the problem for years now but haven't understood until recently the irregular pattern of harm. “Some patients who have congestive heart failure have received a very low dose” of anthracycline, Bhatia explains. “And others have received high, high doses and have escaped congestive heart failure.” That led her to look for susceptibility linked to a metabolic response.

    The CBR proteins, which are expressed in the liver and brain, as well as the heart, break anthracyclines down into an alcohol metabolite. Lab studies on human heart cells have shown that high-risk variants of two genes in the family—CBR1 and CBR3—produce enzymes that metabolize the drug more efficiently, causing larger-than-normal amounts of metabolites to collect. Studies have shown that when these metabolites build up to a high level in the heart, the cells die off, doing permanent damage to the muscle.

    After a pilot study in 2008 indicated that anthracycline-treated children with a variant of the CBR3 gene had a higher risk for heart problems, Bhatia and her group scaled up their research. They matched 165 patients who had been treated with anthracyclines as children and had later developed heart failure with 323 controls: childhood cancer patients with identical treatment but no heart failure. Javier Blanco, a pharmaceutical scientist at the University at Buffalo in New York, genotyped the blood and saliva samples.

    When Bhatia's group crunched the numbers back at the City of Hope, they found confirmation of the pilot study. Patients with the high-risk variant of the CBR1 or CBR3 gene who were exposed to 250 milligrams of anthracyclines per square meter of body surface or less, a common dose range for cancers like Hodgkin's lymphoma and acute lymphoblastic leukemia, were 4.8 times more likely to develop congestive heart failure. (Patients treated with a higher dose were at high risk regardless of their CBR status.) Blanco, who presented the study at the American Society of Clinical Oncology's 2010 annual meeting on 7 June, cautions that the results still need to be replicated. The next step will be to apply the findings to patients at risk. Bhatia hopes to start a clinical trial within 2 years that would use different chemotherapy treatments or a “cardioprotectant” drug.

    Daniel Mulrooney, a pediatric oncologist at the University of Minnesota, Twin Cities, and a leader in research linking low doses of radiation and chemotherapy to an increased risk of heart disease in survivors, calls the CBR study fascinating. “So much of cancer survivorship has been describing the epidemiology,” he says. “What this does is it helps us understand why these things happen. And moving from describing late effects to understanding the mechanism of why these things happen is crucial to treating them.”

    Expanding the search

    The CBR finding is one of a handful to identify genetic risk factors for late effects. Variants in the COMT and TPMT genes, which affect metabolism, have been associated with hearing loss among cancer survivors treated with cisplatin, a platinum-based chemotherapy drug. The leptin-receptor gene has been linked to obesity among female survivors of acute lymphoblastic leukemia.

    Wider net.

    Smita Bhatia and Barry Rosenstein are involved in an international search for the causes of therapy-related late effects.

    CREDITS (TOP TO BOTTOM): PHILIP CHANNING/CITY OF HOPE; @ 2010 MOUNT SINAI MEDICAL CENTER

    Some researchers have devoted their careers to the subject and unearthed surprisingly little, partly because the cost of genome sequencing is high. But as costs drop, it will become increasingly easy for researchers to study multiple genes, says Joseph Neglia, co-leader of the Cancer Outcomes and Survivorship Research Program at the University of Minnesota's Masonic Cancer Center and also an author of the CBR study.

    Studying multiple genes is essential to understanding the pathology of a disease, says Neglia: “The concept that there's a single gene that's the master controller for a single drug is just fundamentally wrong.” Bhatia agrees. She thinks the future of this research lies in probing groups of carefully selected candidate genes believed to be involved in late-effects pathways. The CBR story, although exciting, is only part of a bigger genetic picture, she says.

    To help fill out that picture, Bhatia is heading up the largest multi-institutional case-control study yet on the late effects of chemotherapy and radiation, examining genes among 1175 childhood cancer survivors who developed heart disease, stroke, bone damage, or a second cancer after their initial treatment.

    At the Mount Sinai Medical Center in New York City, Barry Rosenstein, a radiation biologist, has helped to organize another search. Rosenstein has studied genetic predictors for radiation-induced late effects for 12 years. (He calls it “radiogenomics.”) He published nearly 20 findings on candidate genes that appeared to be linked to disease, including a 2006 paper on a genetic marker for skin fibrosis among breast cancer patients. But many of his studies had limitations: Results were not consistently validated, or relative risks were too low to be useful in the clinic. The skin fibrosis finding, for example, was replicated in some studies but not others. “We came to a bit of a dead end,” he says: “Nothing was good enough to serve as a predictive assay.”

    He has shifted his focus to genomewide studies, teaming with Catharine West, a professor of radiation biology at the University of Manchester in the United Kingdom, to create an international “Radiogenomics Consortium” of biologists, oncologists, epidemiologists, and geneticists. He says, “It's kind of an agnostic approach. We're going to give every gene a chance.”

    In the clinic, considering late effects during the initial cancer treatment can be difficult. “We can't sacrifice successful treatment of the primary cancer,” Neglia says. “But it's a tough balance. I'm sitting here, looking at a picture on my desk of a teenager who was treated for Hodgkin's disease who may be looking at a hip replacement … because of steroid side effects. And she's 18. We appear to have cured Hodgkin's disease, but there's a cost.”

    It is a Faustian bargain: winning freedom from one cancer in youth in exchange for the specter of a different disease in adulthood. The pervasiveness of bad outcomes is what “makes us so committed to trying to understand what happened, so we can prevent it in the future,” Bhatia says. This future, researchers hope, will include increasingly personalized cancer treatments designed to minimize the risks of late effects or arrest them at their earliest stage. It's about restoring what cure really means, says Neglia: “getting the child back to who they were before the diagnosis of cancer.”

  10. Marine Biogeochemistry

    The Invisible Hand Behind A Vast Carbon Reservoir

    1. Richard Stone

    A key element of the carbon cycle is the microbial conversion of dissolved organic carbon into inedible forms. Can it also serve to sequester CO2?

    XIAMEN, CHINA—For simple sea creatures, dissolved organic carbon (DOC) is the staff of life. Much of it, however, is as unpalatable as chaff and accumulates in the water column. Scientists are unraveling how organic matter in the marine food chain is converted into forms that less readily relinquish carbon in the form of carbon dioxide (CO2). “The existence of this ‘inedible’ organic carbon in the ocean has been known for quite some time. But its role in the global carbon cycle has been recognized only recently,” says Michal Koblizek, a microbiologist at the Institute of Microbiology in Trebon, Czech Republic.

    New findings are unmasking the invisible processes that suspend immense amounts of carbon just below the ocean waves. “It's really huge. It's comparable to all the carbon dioxide in the air,” says Jiao Nianzhi, a microbial ecologist here at Xiamen University. He and others are exploring the tantalizing prospect of sequestering CO2 in this reservoir. It's too early to say whether the vast pool will respond to geoengineering, says Dennis Hansell, a marine biogeochemist at the University of Miami in Florida. However, he says, “I expect the light to come on over heads and we'll experience an ‘ah ha!’ moment.”

    Data from several research cruises have yielded a broad-brush view of what Jiao has dubbed the microbial carbon pump (MCP): the microbe-driven conversion of bioavailable organic carbon into difficult-to-digest forms known as refractory DOC. This summer, the European Project on Ocean Acidification is carrying out a slate of experiments in Arctic waters that includes probing the MCP. Then in October, Jiao's team heads to the opposite thermal extreme: They will explore the mechanisms of the MCP and CO2 sequestration in the equatorial Indo-Pacific Warm Pool, the warmest marine waters in the world. The MCP will also be featured next month at a Gordon Research Conference on marine microbes, and it is outlined in a paper in press at Nature Reviews Microbiology. The concept “could revolutionize our view of carbon sequestration,” says Markus Weinbauer, a microbial oceanographer at Laboratoire d'Océanographie de Villefranche in France.

    The ocean surface is like a planet-sized set of lungs that inhale and exhale CO2. As a global average, the oceans take up about 2% more of the gas than they release. Some CO2 dissolves into the water column, forming carbonic acid. As atmospheric CO2 levels rise, ocean pH decreases, a phenomenon called acidification that could endanger corals and other creatures by slowing the growth of carbonate skeletons (see p. 1500). Carbon also enters the seas through the food web: During photosynthesis, phytoplankton fixes CO2 to organic carbon—as much as 60 gigatons of carbon per year, roughly the same amount fixed on land. “The carbon is not captured for long,” says Koblizek. Most new marine biomass is consumed in days and returned to the air as CO2. Some, however, ends up in the deep ocean sink, when remains of dead organisms fall to the sea floor. Each year, this biological pump deposits roughly 300 million tons of carbon in the seabed.
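
    The mismatch between those two figures is stark and worth a back-of-the-envelope check: of the roughly 60 gigatons of carbon that phytoplankton fix each year, the biological pump buries only about half a percent in the seabed,

    \[ \frac{0.3\ \mathrm{Gt\,C\,yr^{-1}}}{60\ \mathrm{Gt\,C\,yr^{-1}}} = 0.005 = 0.5\%, \]

    with the rest respired back to CO2 or recycled in the water column.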

    DOC doc.

    Jiao Nianzhi formulated the MCP concept based on his studies of AAPB, an unusual kind of photosynthetic bacteria (left).

    CREDITS: COURTESY OF JIAO NIANZHI

    Even more massive amounts of carbon are suspended in the water column as DOC. The oceans hold an estimated 700 billion tons of carbon as DOC—more than all land biomass put together (600 billion tons of carbon) and nearly as much as all the CO2 in the air (750 billion tons of carbon). About 95% of this dissolved organic carbon is bound up in refractory form: “the largest pool of organic matter in the ocean,” says Farooq Azam, a microbiologist at Scripps Institution of Oceanography in San Diego, California. In the December 2009 issue of Oceanography, a team led by Hansell and Craig Carlson of the University of California, Santa Barbara, compiled the first global map of DOC distribution. Carbon-14 studies suggest that refractory compounds swirl in this microbial eddy for more than 6000 years, several times the circulation time of the ocean.

    The realization that refractory DOC is a key element in the global carbon cycle has lit a fire under efforts to figure out what the stuff is and where it comes from. Researchers now know that refractory DOC consists of thousands of compounds, such as complex polysaccharides and humic acids. A team led by Xosé Antón Álvarez Salgado of the Instituto de Investigaciones Marinas in Vigo, Spain, has tracked the conversion of some forms of bioavailable carbon to refractory carbon by observing changes in their optical properties: Humic substances absorb UV light and re-emit it as blue fluorescence at specific wavelengths.

    The origins of most refractory DOC are a black box. Some is produced when light degrades organic matter near the ocean surface. Oil seeps contribute to the pool. “The oil spill in the Gulf of Mexico is just one drastic example of how this material is released into the ocean,” says Meinhard Simon, a microbial oceanographer at the University of Oldenburg in Germany. Other compounds are likely forged in underwater vents or in wildfires and swept into the sea. For the most part, however, says Azam, “we lack understanding of the mechanisms of its formation or variations in its magnitude and composition.”

    Azam and others credit Jiao with a key insight: the recognition that microbes play a dominant role in “pumping” bioavailable carbon into a pool of relatively inert compounds. Some refractory DOC hangs in the upper water column, while some gets shunted to the deep ocean interior via the biological pump. The MCP “may act as one of the conveyor belts that transport and store carbon in the deep oceans,” says Chen-Tung “Arthur” Chen, an ocean carbonate chemist at National Sun Yat-sen University in Kaohsiung, Taiwan. The MCP also appears to function in deep waters, where bacteria adapted to the high-pressure environment may have “a special capacity” to degrade refractory DOC, says Christian Tamburini, a microbiologist at the Centre d'Océanologie de Marseille in France.

    It took sharp sleuthing to uncover the microbial connection with refractory DOC. In a landmark paper in 2001, Hiroshi Ogawa of the University of Tokyo and colleagues showed that marine microbes are able to convert bioavailable DOC to refractory DOC (Science, 4 May 2001, p. 917). Then a month later, Zbigniew Kolber, now at the Monterey Bay Aquarium Research Institute in Moss Landing, California, and colleagues reported that in the upper open ocean, an unusual class of photosynthetic microbes called aerobic anoxygenic phototrophic bacteria (AAPB) accounts for 11% of the total microbial community (Science, 29 June 2001, p. 2492). AAPBs seemed to be plentiful everywhere, according to measurements of infrared fluorescence from the microbes' light-absorbing pigments.

    It turned out, though, that other organisms were throwing the AAPB estimates way off the mark. Using a new technique, Jiao's group determined that the fluorescent glow of phytoplankton was masking the glow of the target microbes. “Just like when the moon is bright, fewer stars are visible,” Jiao says. He put the new approach through its paces in 2005, when China's Ocean 1 research vessel conducted campaigns to mark the 600th anniversary of Admiral Zheng He's historic voyages. The observations “turned things upside down,” Jiao says. His group found that AAPBs are more abundant in nutrient-rich waters than in the open ocean, indicating that AAPB population levels are linked with DOC, not light.

    Next, Jiao found that AAPBs are prone to viral infection, and he isolated the first phage that's specific for these bacteria. Phages rip apart their hosts, spilling their guts, including organic carbon, into the water. This viral shunt acting on many marine bacteria “may be a significant player in the accumulation of refractory DOC compounds” in the water column, says Steven Wilhelm, a microbiologist at the University of Tennessee, Knoxville. Pulling together several strands—the ubiquity of AAPBs, their low abundance but high turnover rate, the tight link to DOC, and their susceptibility to infection—Jiao proposed that AAPBs and other microbes are a key mechanism for the conversion of bioavailable DOC to refractory DOC. That may seem counterintuitive, as microbes do not set out to produce refractory DOC; rather, the compounds are a byproduct of their demise. “This process is not beneficial to the cell,” says Simon.

    Double-barrel pump.

    Each year, the biological pump deposits some 300 million tons of carbon in the deep ocean sink. Even more massive amounts are suspended in the water column as dissolved organic carbon, much of which is converted into refractory forms by the microbial carbon pump.

    CREDIT: C. BICKEL/SCIENCE

    Because the buildup of refractory DOC in the water column is accidental, it will be a challenge to coax microbes to sequester more carbon. For decades, researchers have been tinkering with the biological pump to store more carbon in the deep ocean by seeding seas with iron fertilizer. The iron triggers phytoplankton blooms that suck more CO2 from the air. That should also drive more carbon into the refractory pool, Koblizek says.

    Even tweaking the MCP could have a profound effect. The water column holds on average 35 to 40 micromoles of carbon from refractory DOC per liter. An increase of a mere 2 to 3 micromoles per liter would sock away several billion tons of carbon, says Nagappa Ramaiah, a marine microbial ecologist at the National Institute of Oceanography in Goa, India. “We have to investigate any and all means to help sink the excess carbon,” he says.
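
    Rough arithmetic (a sketch assuming a total ocean volume of about 1.3 × 10^21 liters, a standard estimate not given in the article) shows why such tiny concentration changes matter. The quoted 35 to 40 micromoles per liter squares with the roughly 700-billion-ton DOC pool cited earlier, and each additional micromole of refractory carbon per liter, if spread through the whole ocean, would represent on the order of 16 billion tons:

    \[ 1\ \mu\mathrm{mol\,C\,L^{-1}} \times 12\ \mathrm{g\,mol^{-1}} \times 1.3\times10^{21}\ \mathrm{L} \approx 1.6\times10^{16}\ \mathrm{g} = 16\ \mathrm{Gt\,C}. \]

    Ramaiah's several-billion-ton figure presumably reflects the more conservative assumption that any increase would be confined to the upper part of the water column.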

    Two billion years ago, when bacteria ruled Earth, the oceans held 500 times as much DOC as today, most likely generated by the MCP, Jiao says. Ecosystem dynamics have changed immensely since then, but the microbial sequestration potential could still be huge, he argues. No chemical equilibrium would cap the conversion of bioavailable DOC to refractory DOC, and carbon stored this way, unlike dissolved CO2, would not exacerbate ocean acidification, says Jiao, who is planning pilot experiments this summer. Ramaiah, meanwhile, says he is looking for enhanced sequestration potential in select marine bacteria strains.

    There's no simple recipe—and some scientists are not convinced that it's feasible or even safe. “I do not think it is possible to enhance carbon sequestration by the MCP. We have no handle on any controls” of how refractory DOC is generated, says Simon. With the present knowledge, any sequestration effort, argues Weinbauer, “could come back like a boomerang and worsen the problem.” At the same time, humans may already be “inadvertently stimulating the MCP,” says Salgado. Global warming is increasing stratification, reducing deep convection, and stimulating microbial respiration—all of which favor the MCP, he says.

    The MCP concept should help address critical issues, such as whether ocean acidification and warming will significantly alter carbon flux into refractory DOC, says Azam, who with Jiao chairs the Scientific Committee on Oceanic Research's new working group on the role of MCP in carbon biogeochemistry. The upcoming research cruises should fill in more details of how the MCP governs carbon cycling and how it may respond to climate change. As Wilhelm notes, “We are just at the dawn of developing this understanding.”

  11. News

    Ocean Acidification Unprecedented, Unsettling

    1. Richard A. Kerr

    Humans are caught up in a grand planetary experiment of lowering the ocean's pH, with a potentially devastating toll on marine life.

    Aside from the dinosaur-killing asteroid impact, the world has probably never seen the likes of what's brewing in today's oceans. By spewing carbon dioxide from smokestacks and tailpipes at a gigatons-per-year pace, humans are conducting a grand geophysical experiment, not just on climate but on the oceans as well.

    First victims?

    Corals appear to be particularly vulnerable to falling pH caused by rising carbon dioxide.

    CREDIT: PHOTOS.COM

    Over the past 4 years, there's been a crescendo of concern that the ocean experiment may be scarier than its climate counterpart (http://news.sciencemag.org/sciencenow/2006/07/05-01.html). Now the geochemists are weighing in, and they are not mincing words: The physics and chemistry of adding an acid to the ocean are so well understood, so inexorable, that there cannot be an iota of doubt—gigatons of acid are lowering the pH of the world ocean, humans are totally responsible, and the more carbon dioxide we emit, the worse it's going to get. Unconstrained emissions growth is likely to leave the current era of human planetary dominance “as one of the most notable, if not cataclysmic, events in the history of our planet,” geochemist Lee Kump of Pennsylvania State University, University Park, and colleagues wrote last December in a special issue of Oceanography. The geochemical disruption will reverberate for tens of thousands of years.

    It's less clear how marine life will fare. “We can detect these changes [in ocean acidity], but we still don't have a good idea of how ecosystems would change,” says marine biologist Victoria Fabry of California State University, San Marcos. With nothing in the geologic record as severe as the ongoing plunge in ocean pH, paleontologists can't say for sure how organisms that build carbonate shells or skeletons will react. In the laboratory, corals always do poorly. The lab responses of other organisms are mixed (http://news.sciencemag.org/sciencenow/2009/12/01-01.html). In the field, researchers see signs that coral growth does slow, oyster larvae suffer, and plankton with calcareous skeletons lose mass. There are enough alarming signs that global oceanic acidification “is an experiment we would not choose to do,” says Fabry.

    Blue, blue, blue.

    Measurements to 1000 meters deep across the North Pacific revealed that in 15 years carbon dioxide emissions drove down pH (blues) in all surface waters and as deeply as 550 meters.

    CREDIT: R. H. BYRNE ET AL., GEOPHYSICAL RESEARCH LETTERS 37 © 2010 AMERICAN GEOPHYSICAL UNION

    Nothing like it

    Strictly speaking, the ocean, now at a pH of 8.1, will not turn into an acid, as its pH will not drop below 7.0. But on dissolving into the ocean, carbon dioxide instantly forms bicarbonate ions (HCO3−) and hydrogen ions—the H+ of pH. The “acidification” resulting from the current carbon dioxide emissions is massive and rapid, a combination that is “almost certainly unprecedented in Earth history,” says earth systems modeler Andrew Ridgwell of the University of Bristol, United Kingdom.
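
    Spelled out, this is the standard carbonate equilibrium chain (textbook chemistry, not given explicitly in the article):

    \[ \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^- \rightleftharpoons 2H^+ + CO_3^{2-}}. \]

    The extra hydrogen ions from added CO2 also push the final step back toward bicarbonate, depleting the carbonate ions that shell-building organisms need, as described below.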

    The closest analog in the geologic record to the present acidification appears to be the Paleocene-Eocene Thermal Maximum (PETM) 55.8 million years ago. At its start, anywhere from 2000 to 7000 gigatons of carbon were released as methane and carbon dioxide, the methane quickly oxidizing to carbon dioxide. Where it all came from—volcanoes, icy sea-floor methane hydrates, marshy peat, or a combination—no one is sure, but almost all of it would eventually have gone into the ocean. PETM's carbon gush was on a par with what burning the 2180 gigatons of carbon in the world's fossil fuel reserves would produce, note Kump and his colleagues.

    The difference this time around is speed. Today, “you could argue the rate of release is 10 times faster [than at the PETM], if not faster,” says paleoceanographer James Zachos of the University of California, Santa Cruz. Whereas nature took a few thousand years to spout out thousands of gigatons of carbon, he notes, humans could be doing it in a few centuries.

    And speed makes a big difference. It takes the ocean about 1000 years to flush carbon dioxide added to surface waters into the deep sea where sediments can eventually neutralize the added acid. The PETM release appears to have been slow enough that no biological catastrophe struck in the upper ocean, only an extinction among tiny shell-forming organisms living on the deep sea floor. But today's emissions are so rapid that they are piling up in surface waters.

    And the acid flows

    The latest evidence of raging acidification of surface waters comes in the first direct, basinwide observation of plunging pH. Marine chemist Robert Byrne of the University of South Florida in St. Petersburg and colleagues reported 20 January in Geophysical Research Letters that the pH of surface waters along a line running 3200 kilometers north from near the island of Hawaii fell between 1991 and 2006 (see figure, p. 1500). The pH decline attributable to human activities over the 15 years was 0.026 pH unit, a drop Byrne calls “startling” in its rapidity. Overall, researchers estimate there has been a 0.1-pH-unit decline for the global ocean since industrialization began a couple of centuries ago. In logarithmic pH units, the change may seem tiny, but in absolute terms, that translates into a 30% increase in surface-ocean acidity.

    Now ocean pH is lower than it's been for 20 million years, and it's going to get lower, says marine chemist Richard Feely of the National Oceanic and Atmospheric Administration's (NOAA's) Pacific Marine Environmental Laboratory in Seattle, Washington. He and his colleagues have modeled future pH based on what he calls the irrefutable chemistry of acidification. The model assumes business-as-usual growth in carbon dioxide emissions. As they report in the same Oceanography issue, the modeling predicts a drop from a pre-industrial pH of 8.2 to about 7.8 by the end of this century. That would increase the surface ocean's acidity by about 150% on average.
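
    Both percentages follow from the definition of pH as the negative base-10 logarithm of the hydrogen ion concentration: a drop of ΔpH multiplies the concentration by 10^ΔpH. As a quick check of the figures above,

    \[ 10^{0.1} \approx 1.26 \quad (\text{roughly a 30\% rise}), \qquad 10^{8.2-7.8} = 10^{0.4} \approx 2.5 \quad (\text{about a 150\% rise}). \]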

    Living with acid

    The future of marine life in an acidifying ocean is far less clear than the chemistry of acidification but nonetheless looks bleak for many organisms. Falling pH has two effects on species that build shells or skeletons of calcium carbonate. These organisms include tropical corals, echinoderms, mollusks, microscopic foraminifera floating in surface waters, and certain algae. When the hydrogen ion concentration of seawater gets high enough, the calcium carbonate in these organisms begins to dissolve.

    Colder waters with a greater capacity for carbon dioxide will be affected first. Feely's modeling projects that by midcentury, all Arctic waters will corrode the most vulnerable crystal form of calcium carbonate, called aragonite. By the end of the century, all of the Southern Ocean and parts of the North Pacific will be corrosive to sea snails called pteropods and other aragonitic organisms.

    Going, going, …

    In seawater of the pH that may prevail by the century's end, the shell of a pteropod dissolves in a matter of weeks (top to bottom).

    CREDIT: © DAVID LIITTSCHWAGER/NATIONAL GEOGRAPHIC STOCK

    The other effect of falling pH is already at work. As hydrogen ion concentrations go up, more and more of the ocean's carbonate ions—the building block of all carbonate shells and skeletons—combine with hydrogen ions to form bicarbonate, driving down the concentration of the essential carbonate. Organisms have a harder time extracting the carbonate they need from the surrounding water.

    In a compilation of controlled acidification studies, marine chemist Scott Doney of the Woods Hole Oceanographic Institution in Massachusetts and his colleagues found that all 11 species of tropical coral studied under falling pH slowed their aragonite production. Among noncoral calcifiers, most also slowed their carbonate building, though a few, such as certain coralline red algae and echinoderms, increased it.

    So far, field observations tend to support the deleterious effects of falling pH. In the 2 January 2009 issue of Science (p. 116), marine scientists Glenn De'ath, Janice Lough, and Katharina Fabricius of the Australian Institute of Marine Science in Townsville reported on their broad survey of coral across the Great Barrier Reef of Australia. Reading the rate of growth recorded in coral skeletons, the group found that calcification across the Great Barrier Reef had declined 14.2% since 1990. And they found no sign that such a “severe and sudden decline” had occurred in the past 400 years. Although the group could not pin down what caused the slower growth, they pointed to a rise in ocean temperatures combined with declining pH.

    Planktonic foraminifera also seem to be suffering in this lower pH environment. Paleoceanographer Andrew Moy and his colleagues at the Antarctic Climate and Ecosystems Cooperative Research Centre in Hobart, Australia, found that the shells of one type of foram growing in today's Southern Ocean are 30% lighter than those of the same species from the past few thousand years. In a paper published online 8 March 2009 in Nature Geoscience, they point to acidification as the cause because they find a correlation between higher atmospheric carbon dioxide and lower shell weight in a 50,000-year-long Southern Ocean sediment record.

    Curtailed shell growth may be fatal for some organisms. Water naturally low in pH wells up along the coast of Oregon and sometimes floods into Netarts Bay, from which the Whiskey Creek Shellfish Hatchery in Tillamook draws its water. Alan Barton, now of Bear Creek Shellfish Hatchery in North Carolina; Sue Cudd of Whiskey Creek Shellfish Hatchery in Tillamook, Oregon; and chemical oceanographer Burke Hales of Oregon State University, Corvallis, found a strong correlation between corrosively high concentrations of carbon dioxide in hatchery water and mass mortality of oyster larvae forming their first partially aragonitic shells. “We're getting a window into the future of what the open ocean will be like in 100 years,” says Hales.

    In April, the National Research Council (NRC) pointed out in a report that getting a clearer view through that window will take more time and money, which governments are starting to spend. The European Project on Ocean Acidification, a 27-institute research consortium, is expanding the monitoring of ongoing acidification and examining biological effects. The 2009 Federal Ocean Acidification Research and Monitoring Act got interagency coordination going in the United States, and $5.5 million in NOAA's fiscal year 2010 budget has boosted research in that agency. But the NRC report also concluded that “development of a National Ocean Acidification Program will be a complex undertaking.” They got that right.

  12. News

    A Push for Quieter Ships

    1. David Malakoff
    1. David Malakoff is a writer in Alexandria, Virginia.

    Although sonar and air guns have grabbed headlines, researchers say the cacophony from ships creates far more ocean noise.

    A shifting soundscape.

    Increased shipping has made the ocean noisier, potentially disrupting communication among whales and other marine life.

    CREDIT: JOHN CALAMBOKIDIS/CASCADIA RESEARCH, WWW.CASCADIARESEARCH.ORG

    From a drifting boat, the ocean off Massachusetts can seem like one of the quietest places on Earth. But Leila Hatch isn't fooled. Over the past 4 years, the marine ecologist has helped lead studies at the Gerry E. Studds Stellwagen Bank National Marine Sanctuary that are documenting how human activities are ramping up the region's undersea noise—and highlighting just how difficult it may be to turn down the volume.

    “We're getting a much better understanding of how much sound is in the sanctuary and where it's coming from,” says Hatch, who works for the U.S. National Oceanic and Atmospheric Administration. Sophisticated acoustic sensors, for instance, have enabled researchers to assemble a detailed “noise budget” for the 2200-square-kilometer preserve. It includes the musical calls of whales and fish and the rumbles created by wind and rain. But it also documents the mechanical thrashing of giant ships steaming into nearby Boston Harbor.

    Those ships “can actually double the noise levels in some parts of the sanctuary,” notes Hatch, swamping the low-frequency wavelengths that whales and other sea creatures use to communicate, find mates, and navigate their watery world. Researchers worry that the cacophony is making it even harder for these creatures to overcome the numerous human threats—from toxic pollution to overexploitation—that have already pushed some to the edge of extinction.

    On the same frequency.

    Large ships produce sounds in the same bandwidths that fish and some marine mammals use (right). The acoustic clutter can be intense in heavily traveled sea lanes (far right) of the Boston Harbor channel as it passes through the Stellwagen marine sanctuary off Massachusetts.

    CREDIT: ADAPTED FROM B. SOUTHALL/NMFS/NOAA, JUPITERIMAGES; ADAPTED FROM NOAA/STELLWAGEN BANK NATIONAL MARINE SANCTUARY

    Stellwagen's monitoring program, one of the world's most intensive, has implications well beyond the marine sanctuary. It has helped focus attention on the growing acoustic clutter created by the world's nearly 100,000 large commercial ships. Although other sources of ocean noise—including military sonar, pile drivers, and the undersea air guns that scientists and energy companies use to map the sea floor—have generated more controversy because of the risks to sea life (Science, 11 January 2008, p. 147), researchers say everyday ship traffic is arguably the sea's most pressing sound problem. Over the past 50 years, the growing trade fleet has contributed to a 32-fold increase in low-frequency noise in some parts of the ocean; that's a doubling of the din every decade. “Shipping may be responsible for 90% of the sound energy we add to the ocean, but it seems like we spend most of our time talking about the other 5 or 10% of the problem,” says Brandon Southall, a former leader of U.S. government efforts to study and regulate ocean noise and now a consultant with SEA Inc. in Santa Cruz, California. In part, that's because ships typically aren't covered by the world's few laws that deal with ocean noise.
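
    The two versions of that statistic are the same claim in different units: five decade-on-decade doublings over 50 years compound to a 32-fold increase in sound energy, which acousticians would express as a gain of about 15 decibels,

    \[ 2^{50/10} = 2^{5} = 32, \qquad 10\log_{10} 32 \approx 15\ \mathrm{dB}. \]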

    The conversation, however, is shifting. Recently, a group of shipping industry officials, scientists, and government regulators began to examine strategies for slashing low-frequency shipping noise. Engineers say such a reduction is technologically feasible, but the costs—and opposition from some shipping companies—could be formidable. Still, “shipping noise is an issue that is getting more attention,” says acoustician Arthur Popper of the University of Maryland, College Park, who studies the effects of sound on fish and is organizing a major international conference* on ocean noise set for Ireland in August.

    Sound pollution

    The Stellwagen sanctuary's innovative monitoring effort is sure to get attention at the meeting. Almost from the moment the U.S. government preserve was created in 1992, its managers have moved to understand how ocean noise might be influencing Stellwagen's animals, particularly endangered humpback and right whales. In 1996, researchers began their first efforts to measure noise levels and by 2004 had dotted the sanctuary's waters with acoustic buoys and sea floor–mounted instruments. In part, the devices—many built at Cornell University—were designed to listen to the thousands of tugs, tankers, and cargo carriers that ply the Boston shipping channel, which cuts through the sanctuary. Other researchers have attached sensitive electronic “ears” to some of Stellwagen's humpback whales so as to understand how the whales respond to different kinds of sounds. And last year, scientists from Cornell and the Woods Hole Oceanographic Institution in Massachusetts unveiled a unique network of 10 buoy-mounted sensors that are specially tuned to pick up the calls of right whales and then warn passing ships in a bid to prevent deadly collisions.

    Together, such efforts have provided an unusually detailed sound portrait of this small sliver of ocean. Researchers, for instance, now know how the sanctuary's soundscape can vary by place, time of day, and ocean conditions. And they can track both passing ships and whales.

    The data are unsettling. One potential problem is that shipping noise consistently occurs at levels loud enough—and within frequencies low enough (between 10 and 1000 hertz)—to make it hard for whales to maintain acoustic contact, Hatch and her colleagues concluded in a 2008 paper in Environmental Management.

    The study, which combined acoustic data and individual ship movements, found that large tankers produced twice the acoustic signal of cargo ships and were more than 100 times noisier than research ships (which tend to be smaller and designed to be quieter). Because saltwater is such an efficient conductor of low-frequency sound, the researchers calculated that a single large vessel could transmit its sound throughout most of the sanctuary.

    One worry is that the din is essentially deafening North Atlantic right whales, of which fewer than 400 remain. In a quiet ocean, one whale's calls might be discernible to a mate or family member swimming 20 or more kilometers away. In noisy Stellwagen, however, that range is reduced, possibly down to just a kilometer or two, estimates a team led by acoustician Christopher Clark of the Cornell Lab of Ornithology (see figure, below). The whale's “communication space”—the three-dimensional bubble in which it can hear and be heard—“is seriously compromised by noise from commercial shipping traffic,” they concluded last year in an ocean noise special issue of Marine Ecology Progress Series.
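
    The geometry makes the loss worse than the raw range figures suggest. Because these low-frequency calls spread mostly horizontally, the area over which a whale can be heard scales roughly with the square of its range, so cutting the range from 20 kilometers to 2 kilometers—the roughly 90% reduction Clark's team estimates—eliminates about 99% of the communication space:

    \[ \left(\frac{2\ \mathrm{km}}{20\ \mathrm{km}}\right)^{2} = 0.01. \]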

    So far, it's hard to know if the cacophony has harmed the whales. Drawing on studies of chronic noise's impacts on land-dwelling mammals, however, some researchers predict shipping noise is increasing the animals' stress levels. A few studies have also shown that the whales call less in noisy areas. They may also be retuning their calls in a bid to be heard. Over the past several decades, for instance, the fundamental frequency of “contact calls” made by North Atlantic right whales has risen by about an octave, according to studies by bioacoustician Susan Parks of Pennsylvania State University, University Park. Although such adjustments might seem minor, researchers say they could carry significant survival costs if they even modestly reduce breeding or hunting success. Some fish may be similarly affected, but assessing noise effects in the wild is “extremely difficult,” says Popper.

    Hear me?

    Shipping noise may reduce the communication range (red circle) of some baleen whales, such as the North Atlantic right whale (inset), by 90% (top right).

    CREDITS: ADAPTED FROM C. W. CLARK/CORNELL UNIVERSITY; (MAP SOURCE) GOOGLE EARTH; (INSET) NOAA

    Silencing ships

    Despite the uncertainty, some parts of the shipping industry are taking the issue seriously. At a 2008 meeting in Hamburg, Germany—a hub for world shipbuilders—the Darmstadt-based Okeanos Foundation for the Sea persuaded several industry leaders to declare vessel noise “a global problem” and set a goal of freezing it at current levels within 10 years and then reducing it severalfold within 30 years. In particular, the leaders called on the International Maritime Organization (IMO)—a United Nations group that oversees shipping—to convene a working group to explore technical options for quieting ships.

    The United States has since tapped Stellwagen's Hatch to advise its member of the panel. “The goal is to take a holistic approach to the rising level of ambient shipping noise, which everyone agrees is real,” she says. “If we can push down that overall level, we know we can increase the communication space for a lot of animals.”

    There are plenty of ways to make ships quieter, shipping experts have told IMO in written comments. The biggest need, they say, is to reduce the tendency of ship propellers to “cavitate,” or produce noise when tiny air bubbles that form around the blades burst. By redesigning the propellers—and changing the way water is funneled into them—shipbuilders could drastically cut sound production under normal operating conditions. Other gains could come from mounting noisy engines and machinery on sound-insulating platforms, streamlining boxy hulls now optimized for storage, and slowing cruising speeds. In addition to reducing noise, the experts say those changes would help companies reduce fuel use and pollution.

    “Essentially, a noisy ship is an inefficient ship,” says Hatch. “Until now, there haven't been good enough financial and environmental reasons not to drive a shoebox through the ocean. Now, there are a lot of good ones.”

    Retrofitting existing ships would be costly, however, according to an analysis commissioned by the International Fund for Animal Welfare in Geneva, Switzerland. Quieting an oil tanker, for instance, could cost $2.8 million, an expense the highly decentralized and often financially precarious shipping industry is unlikely to pay without new regulations. Yet few observers see new rules coming anytime soon. Silencing advocates say it may be easier to encourage shipbuilders to start building quieter vessels, especially if IMO can come up with some clear guidelines. But that could be “a decades-long process to update the current fleet,” says Hatch.

    Despite the slow pace, she's happy to see her field move beyond simply documenting the ocean-noise problem to trying to solve it. “We clearly need more science,” Hatch says. “But I think we can also start finding better ways of doing what we need to do in the ocean without creating unnecessary noise.” Like everyone, she's just seeking a little more peace and quiet.

    • * Second International Conference on the Effects of Noise on Aquatic Life, Cork, Ireland, 15–20 August 2010.

  13. News

    Down on the Shrimp Farm

    1. Erik Stokstad

    Can shrimp become the new chicken of the sea without damaging the ocean?

    New leaf.

    Shrimp farms have caused environmental problems such as the destruction of mangrove forests (above), but research in breeding and other areas has led to more productive farms that pollute less.

    CREDITS (LEFT TO RIGHT): GEORGE CHAMBERLAIN/GLOBAL AQUACULTURE ALLIANCE; ALFREDO QUARTO, MANGROVE ACTION PROJECT, WWW.MANGROVEACTIONPROJECT.ORG

    The blue waters of Kung Krabaen Bay in eastern Thailand are fringed with lush mangroves. Yet just a few hundred meters behind the trees are more than 200 small ponds teeming with shrimp that once threatened to cause severe environmental harm. What has been happening in Kung Krabaen Bay illustrates the peril and promise of farming shrimp, which have become the world's biggest marine aquaculture product.

    In the 1980s, poor rice farmers cut down many of the mangroves and dug out ponds to take advantage of an expanding market. Densely stocked, the ponds accumulated a sludge of feces and undigested feed that the farmers flushed into the bay, threatening to choke it. It looked like the farmers were “going to pollute themselves out of business,” recalls Claude Boyd of Auburn University in Alabama, who has studied shrimp farming and water quality in Thailand and elsewhere.

    Sludge dumping, along with other environmentally harmful practices, gave shrimp farming a bad reputation. But in the past 15 years, many shrimp farmers have been cleaning up their act. Although motivated more by economics than environmental concerns, they have made substantial strides in many places to reduce their toll on the marine world, both locally and globally.

    With the help of researchers at the Kung Krabaen Bay Royal Development Study Center, for example, farmers in the bay have replanted the mangroves and switched to a new system for raising shrimp that releases far less pollution into the surrounding water. “It's really quite impressive,” Boyd says. Researchers around the world are refining this system, called biofloc technology, which relies on cultivating microbes to recycle nutrients and reduce waste.

    Academics and companies are also striving to improve processed shrimp feed and replace the fish meal it contains with other protein sources, a change that could help prevent further depletion of fish species at the base of oceanic food webs. Ultimately, by making shrimp aquaculture more productive—through breeding programs and perhaps high-tech inland farms—some researchers hope they can grow cheaper, more plentiful shrimp while sparing marine habitats altogether. “We can take shrimp farming out of the coastal zone,” predicts Addison Lawrence of Texas A&M University, Corpus Christi.

    Much remains to be done. Many of these improvements haven't reached smaller, poorer farms, which raise most of the world's shrimp. And because demand for seafood is going up dramatically, the overall impact on the oceans might still increase. Earlier this month, researchers and others gathered in Bangkok to draft a global strategy for sustainability in aquaculture production, which the Food and Agriculture Organization of the United Nations says will need to almost triple by 2050 to meet demands for freshwater fish and seafood. The challenge is how to do that with minimal environmental harm.

    Growing pains

    Shrimp farming has been going on in Asia for hundreds of years, but in the past 3 decades it has expanded exponentially. Worldwide production rose from less than 100,000 metric tons in 1980 to more than 3 million metric tons in 2007. Shrimp are now one of aquaculture's biggest products, exceeded in volume only by a few freshwater species such as carp and tilapia.
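    The word “exponentially” is not rhetorical flourish. Taking the two endpoints above at face value, production grew roughly 30-fold in 27 years, which implies a sustained compound growth rate of

    \[ 30^{1/27} \approx 1.13, \]

    or about 13% a year, every year, for nearly three decades.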

    Much of the groundwork for modern shrimp farming was laid in Asia. In the 1950s and '60s, Japanese researchers made key steps toward raising the kuruma prawn (Penaeus japonicus) in captivity—not an easy task given the complex life cycle of shrimp: They reproduce in the ocean and mature in estuaries. Researchers in Taiwan later discovered that the tiger prawn (P. monodon) grows especially fast and is well suited to commercial operations.

    Another advance was inducing spawning. Research in the 1970s showed that snipping just one eyestalk lowered the amount of a gonad-inhibiting hormone and coaxed females to spawn. Growing these broodstock in hatcheries eliminated the need to collect wild broodstock by trawling, which was depleting populations of wild shrimp and catching other marine life.

    Collateral damage from shrimp farming ramped up in the 1980s, when farmers sought cheap coastal land for new shrimp farms. Some governments encouraged the conversion of mangrove forests to ponds, and countless square kilometers of this productive coastal ecosystem were lost. Environmental groups protested, and farmers themselves soon discovered that mangrove forests were not a good place for shrimp farms because the soil was too acidic. Remaining mangroves are now largely spared, although destruction still occurs in some developing countries, says Steve Trent, who directs the Environmental Justice Foundation in London.

    The more serious problem has been shrimp farming's effect on water quality. As farms proliferated and production intensified—large farms in Ecuador have even used crop-dusters to feed shrimp—they released more and more effluent. Inefficient feeding methods compound the problem; unstable, low-quality feed breaks down before the shrimp eat it all. Farmers use more than needed, and the unused nutrients are washed out to sea.

    In 2007, Xie Biao and Yu Kaijin of the Nanjing Institute of Environmental Science in China reported in Ocean & Coastal Management that 43 billion tons of wastewater from shrimp farms enter China's coastal waters annually, compared with 4 billion tons of industrial wastewater. “Everyone agrees now that careless development of aquaculture has a dramatic impact on the environment,” says Yoram Avnimelech of Technion, the Israel Institute of Technology, in Haifa. “It has led to massive eutrophication.”

    Two cures in one

    A devastating shrimp disease helped shrimp farming turn a corner. In 1993, an outbreak of white spot syndrome virus (WSSV)—which can wipe out entire farms—caused production to collapse in China. After spreading through Asia, the pandemic reached South and Central America in 1999. WSSV and other viruses “forced tremendous change in the shrimp farming business,” says George Chamberlain of the Global Aquaculture Alliance, a trade group based in St. Louis, Missouri.

    One of those changes is the use of disease-free broodstock raised in biosecure hatcheries, reducing the pressure to collect wild shrimp for broodstock. Another disease-fighting advance is reducing the exchange of water between shrimp ponds and the environment, a practice that can spread the virus. Research in the 1990s showed that water exchange could be eliminated if the ponds were aerated to provide the shrimp with oxygen.

    In aerated ponds, beneficial bacteria flourish and help supplement the shrimp's diet by recycling nutrients already in the pond. With the addition of a carbon source, such as wheat flour, these bacteria convert ammonium from shrimp waste into protein. The microbes glom onto floating particles, creating nutritious masses called biofloc that shrimp eat. Biofloc can reduce the need for protein supplements by as much as 50%.
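    How much wheat flour does that take? A commonly cited rule of thumb from the biofloc literature (the specific figures below are representative assumptions, not measurements from the farms described) runs as follows: microbial biomass contains roughly 4 grams of carbon per gram of nitrogen, carbohydrates such as wheat flour are about 50% carbon, and only about 40% of the added carbon ends up in new bacterial cells. Chaining those together,

    \[ \Delta\text{CH} \approx \frac{4\ \text{g C/g N}}{0.5\ \text{g C/g CH} \times 0.4} = 20\ \text{g carbohydrate per g N}, \]

    so immobilizing each gram of waste ammonium nitrogen calls for on the order of 20 grams of carbohydrate.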

    Waste not.

    High-tech farms (top) enable microbes to convert shrimp feces into protein-rich biofloc (white flecks, left) consumed by shrimp.

    CREDITS (TOP TO BOTTOM): THOMAS JAMES; GEORGE CHAMBERLAIN/GLOBAL AQUACULTURE ALLIANCE

    The technique is used fairly widely in intensive systems in Asia, where farms with higher yields can afford to aerate the water with internal pumps or paddlewheels. Even at more traditional farms, where aeration is too expensive, farmers can boost production with biofloc, says Avnimelech.

    Future farms

    Biofloc can reduce the need for protein in shrimp feed but not eliminate it. And that protein demand has had an environmental impact that extends beyond the shrimp farmer's local bay or coast.

    Shrimp consume 28% of the fish meal used in aquaculture. Demand for fish meal is depleting some of the world's stocks of so-called forage fish. “Fish meal is a global issue,” says Craig Browdy, manager of aquaculture research at Novus International, a feed manufacturer based in St. Charles, Missouri. Researchers are pushing to use fish waste from canneries or to come up with land-based alternatives.

    Feed manufacturers have already created shrimp food based on soybean meal and other sources of vegetable proteins. A decade ago, shrimp feed consisted on average of about 20% to 30% fish meal; now it's between 10% and 20%. “That is a tremendous success story,” says Lawrence. Fish-free feeds are not cheaper, however, because of the expense of removing compounds that affect digestibility and then adding nutritional supplements.

    Despite the progress, skeptics worry that the net expansion of aquaculture will swamp the improvements. In terms of overall impact on ocean ecosystems, shrimp farmers are “not making any gains,” says Peter Bridson of the Monterey Bay Aquarium in California. The solution, according to aquaculture advocates, is not only to improve feed substitutes and intensify production even more but, ultimately, to move shrimp farming out of the coastal zone into concrete raceways covered with greenhouses.

    Experimental prototypes, along with a handful of systems in early commercial operation, can reach yields of 100 tons per hectare per year—10 times the typical yield today. These superintensive biofloc systems could produce fresh shrimp year-round in temperate climates, but they require sophisticated engineering to control water temperature, oxygen levels, and water quality. Whether they can do all that and still compete on price is a difficult question, given the cost of land, labor, and energy.

    Lawrence is optimistic, however, that research and engineering will lead to shrimp that are as cheap as chicken while less polluting. If the industry can do that, then it really will have earned a new reputation.

  14. News

    The Dirt on Ocean Garbage Patches

    1. Jocelyn Kaiser

    Their biological impact is uncertain and their makeup, misunderstood.

    Chances are you've heard of the Great Pacific Garbage Patch. It is, according to countless press and TV reports, a “trash vortex,” “the world's largest rubbish dump,” and a “vast mass of floating debris” midway between Hawaii and California. According to Charles Moore, a sailor-turned-scientist who discovered the patch in 1997 and has been interviewed on The Oprah Winfrey Show, the Late Show with David Letterman, and Good Morning America, it is a plastic soup twice the size of Texas.

    Although many media stories conjure up a chunky soup of bottles and tires, it is mostly an unstrained consommé of small bits of floating plastic. And the patch Moore found isn't the only one. A similar accumulation of plastic particles—which include weathered fishing line, Styrofoam, wrappers, and raw resin pellets—has shown up in the North Atlantic Ocean. But the potential harm to marine life is far from clear. “We just don't know the importance,” says biological oceanographer James Leichter of the Scripps Institution of Oceanography in San Diego, California, who points out that “there's a lot more water than plastic.”

    The accumulation of tiny plastic debris was first reported in 1972, when researchers at the Woods Hole Oceanographic Institution in Massachusetts found plastic particles up to 0.5 centimeters in diameter in their surface plankton nets in the North Atlantic's Sargasso Sea (Science, 17 March 1972, p. 1240). Since then, there have been a dozen or so similar reports, mainly from the North Atlantic and the North Pacific.

    Trash traps.

    A modeler has predicted the spots (red, yellow) within five gyres where oceanic debris will wind up.

    CREDITS: N. MAXIMENKO, P. NIILER, J. HAFNER

    It was Moore who brought the problem to public attention. In 1997, he sailed from Hawaii to Long Beach, California, across a notoriously calm area, where for a solid week he spotted at least one bottle or piece of plastic every hour, he says. Moore went back with some scientists and a plankton tow net. In a 2001 paper in Marine Pollution Bulletin, they reported the highest average plastic count on record in the Pacific—334,271 pieces per square kilometer—and a startling 6:1 ratio of plastic to zooplankton by weight. They worried that the plastic was exposing animals to toxins, pointing to a Japanese study showing that polypropylene pellets can suck up pollutants from seawater.

    Independent Seattle oceanographer Curtis Ebbesmeyer, known for using spilled shipments of shoes and rubber ducks to study ocean currents, suggested that Moore had found a “garbage patch” within the North Pacific subtropical gyre—one of several major ocean gyres, or large, wind-driven, circular current systems with a quiet center.

    While scientists commend Moore's efforts to raise public awareness of marine pollution, some question the 6:1 ratio he came up with. Doubts have also been raised about the patch's size.
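    Some simple unit conversion helps explain the skepticism. Even Moore's record count, striking as it sounds, is thin when spread over the sea surface:

    \[ \frac{334{,}271\ \text{pieces}}{1\ \text{km}^2} = \frac{334{,}271\ \text{pieces}}{10^6\ \text{m}^2} \approx 0.33\ \text{pieces per m}^2, \]

    roughly one small fragment for every 3 square meters of water, consistent with Leichter's point that there is far more water than plastic.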

    Nevertheless, there's clearly a lot of plastic out there. When graduate students from Scripps spent 19 days in the same area last summer sampling sea life, they snagged plastic on every one of 126 plankton tows. Often, the half-liter jar of residue strained from a single 0.5-kilometer tow contained so many plastic chips that the jar “looked like a snow globe,” says graduate student Miriam Goldstein, who led the trip. “That is not normal.”

    Similar findings have come from off the U.S. East Coast. Last winter, the Sea Education Association (SEA), a nonprofit in Falmouth, Massachusetts, that takes students on sailing research trips, reported high—but consistent year-to-year—microplastic counts over a 1450-kilometer transect in the western North Atlantic Ocean that SEA has been sampling for 22 years. Oceanographic modeler Nikolai Maximenko of the University of Hawaii, Manoa, had predicted this patch's location; he has pinpointed other likely patches in the Indian Ocean, South Pacific, and South Atlantic.

    Catch of the day.

    Plastic bits in the ocean (top) and collected from plankton tows in the Great Pacific Garbage Patch.

    CREDITS: SCRIPPS INST. OF OCEANOGRAPHY; D. LAURSEN/ALGALITA MARINE RESEARCH FOUND.; J. LEICHTER/SCRIPPS

    Whether the plastic is damaging marine ecosystems is, however, an open question. In the past, researchers have mostly focused on larger threats: abandoned fishing nets that trap turtles and seals; plastic bags that block the digestive tracts of turtles; and the toothbrushes and bottle caps that seabirds mistake for food, sometimes starving as a result or dying from a blockage. But toxin-laden microplastics may add another risk to marine life. Benthic worms, mussels, krill, sea cucumbers, and birds will ingest tiny plastic particles, according to various studies, some by marine ecologist Richard Thompson of the University of Plymouth in the United Kingdom. “There's quite a lot of signals out there that we need to be concerned,” says Thompson. But nobody has yet confirmed that significant amounts of chemicals wind up in animals' tissues.

    Nor does anyone know where all this plastic ultimately goes. Does it simply get too small to trap with a net? Does it sink to the sediments? Wash onto shore? (Large numbers of microplastics have been found on a Hawaii beach.) One worrisome find from the Scripps trip: a hatchetfish, a midwater dweller, with a plastic chip in its stomach. “It's going somewhere,” says SEA oceanographer Kara Lavender Law. She and others would like to find out where.
