News this Week

Science  24 Jul 2009:
Vol. 325, Issue 5939, pp. 374
  1. South Korea

    Forensic Finds Add Substance to Claims of War Atrocities

    1. Richard Stone

    HAMPYEONG, KOREA—Early in the Korean War, the South's army was fighting on two fronts: against the North, which had invaded in June 1950, and against homegrown Communist guerillas. In February 1951, the army's 11th Division and police in the divided peninsula's southwest were closing in on guerillas holed up on Bulgap Mountain in Hampyeong County. Operation Full Moon—an assault on Bulgap—was planned for the night of 20 February. But the rebels caught wind of the impending attack and, knowing they'd be routed if they made a stand, slipped the cordon. In the meantime, villagers fleeing advancing troops had sought refuge on Bulgap. When soldiers and police stormed the ridge and found only civilians, survivors claim, they dug a long trench, forced the civilians to kneel inside, and then shot them or thrust sharpened bamboo sticks down their throats. Women and children were among the victims.

    That's the story South Korea's Truth and Reconciliation Commission (TRCK) heard when it began to investigate the Bulgap massacre last year. “It was controversial,” says sociologist Kim Dong-Choon, a commissioner and director of the Human Rights and Peace Center of Sungkonghoe University in Seoul. Some commissioners, he says, doubted the recollections of elderly survivors who had lived in Hampyeong when the atrocity occurred but had not witnessed it. “A few of my colleagues thought the victims really were guerillas,” says Kim.

    But the survivors have been vindicated by new forensic finds. Last week, TRCK revealed evidence from an ongoing excavation confirming Bulgap as one of the Korean War's darker chapters. Anthropologist Noh Yong-Seok, who leads TRCK's excavations with help from forensic scientists at Chungbuk National University, says the team has found adult skeletons bent at the knees with finger bones clasped behind their skulls and artifacts that rebel soldiers would have had no use for, such as a woman's hairpin and toys. Investigators here have unearthed bones of several children, the first such verified remains from a Korean War–era massacre site, says Noh.

    The Hampyeong dig is one of the more stunning revelations to date from the truth commission, which in the past 3 years has documented how, in the war's early years, the country's own military and police killed more than 100,000 South Korean civilians deemed “traitors.” Hundreds of thousands more were slain by indiscriminate aerial bombing and ground fire by U.S. and Korean forces, says Kim. “We know now that the Korean War was like a war with no frontline. Everyone was killing each other, whoever was beside them,” says TRCK President Ahn Byung-Ook, a historian at Catholic University of Korea in Seoul.

    Digging up the truth.

    Excavation leader Noh Yong-Seok holds a woman's hairpin, evidence of a civilian massacre.


    As TRCK rewrites history books, it is rushing to wind up investigations before its legal mandate expires in April 2010. President Lee Myung-bak's conservative Administration has not hidden its disdain for the truth commission, which was created by the liberal government of his predecessor, Roh Moo-hyun. “We would prefer to lay the past to rest and solve today's problems, like our economic crisis,” an Administration official told Science.

    Commission officials insist it will be impossible to wrap up active investigations by April. That leaves sociologists wondering whether South Koreans—especially those born after the war ended—will ever come to terms with the incidents TRCK is uncovering. “I don't blame younger generations for lack of interest,” says Ahn. “But I expect them to learn a more accurate account of history.”

    Untold stories

    It is monsoon season in Korea, and after an early downpour, wispy fog clings to Hampyeong's hills and rice paddies. Paintings of butterflies adorn bus stops and farm outbuildings, and an entire hillside's vegetation is trimmed neatly in the shape of a butterfly. These days, Hampyeong is known for its annual butterfly festival—that and for onions, umpteen numbers of which are stacked in bales along the roads.

    After an arduous half-hour slog up a muddy path on Bulgap Mountain, Chang Jae-soo helps two college-age excavators peel back a blue tarp brimming with standing water. The 74-year-old, who lived in a nearby village 6 decades ago, has remarkable stamina—perhaps, he says, because the exhumation has revitalized him. “I feel like I am getting my honor back,” he says.

    A row of tarps, each about 3 meters wide, extends for 50 meters or so along the ridge. Noh's team has been excavating here for about a month. So far they have unearthed remains of about 100 individuals from the shallow grave. Noh crouches and reaches behind a skull and picks up a humerus, several centimeters long, that Chungbuk National University scientists estimate belonged to a 12-year-old child. Spent bullet casings lie scattered among the bones, as do some rusted spoons. “Spoons are one possession the villagers surely would have brought with them. They had to eat,” says Noh. He lays the bone down gently. Nearby is a tiny pale blue ball—a marble. “This is one of the few toys children back then would have had.” They have found bones here of toddlers as young as 3 or 4.

    Although some victims here may have been guerillas, forensic evidence of a civilian massacre, Noh says, is now incontrovertible. About a quarter of the dead were women and children. Noh has led TRCK exhumations at 13 sites that so far have yielded remains of about 1700 people; no other site has such young victims. Hampyeong, he says, “is the most tragic one of all.”

    It's not surprising that villagers would have fled to Bulgap. A month before the Hampyeong incident, troops shot dead Chang's parents, his uncle and aunt, and three younger sisters. Chang, 16 at the time, escaped and later found his way to an elder sister's home across the peninsula. For years after the armistice, Korean society spurned people deemed to be Communist sympathizers—even those like Chang who had nothing to do with the guerillas but were tainted merely by having lost family members in massacres. “People hardly got a chance to get an education or a job,” says Chang, who made ends meet as a traveling salesman.

    Seeking closure.

    Seven members of Chang Jae-soo's family were killed.


    Under the military dictatorships that ruled after the war, Noh explains, “we were taught to not question the government or the army.” At one notorious massacre site, Gyeongsan Cobalt Mine, victims were shot and dumped down shafts. Noh's crew there has so far recovered remains of 240 of an estimated 3500 victims. “Many people knew about the cobalt mine, but for 50 years they didn't say anything. It was taboo,” says Noh, who grew up in the area. In the 1990s, a rising chorus called for an investigation into alleged crimes against civilians, ranging from the period when Korea was a Japanese colony before World War II through the Korean War era and up through the democracy movements that lasted into the 1980s. Roh bowed to pressure and established the commission in December 2005. Roh, who committed suicide 2 months ago, apologized to victims on behalf of the nation in January 2008. “It was the first time in our history that the government acknowledged the war's civilian victims,” says Ahn.

    TRCK's creation opened a floodgate: More than 10,000 petitions have poured in, and several thousand cases are still pending. Investigators have interviewed hundreds of victims and alleged perpetrators; many of the latter have shown little or no remorse. “Offenders justify what they did as inevitable,” says Ahn. However, he notes, “our aim is not to put perpetrators on trial. It is to reconstruct the past through the victims' eyes.”

    Last December, Chang and others formed the Bulgap Mountain–Yongcheon Temple Provincial Union of Bereaved Families of Civilian Victims—one of 100 or so such unions across the country. The union plans to sue the government for compensation once the excavation is finished. But more important than money, Chang insists, is that he and other victims are finally putting the bitter years of postwar discrimination behind them. Using words one would expect from a perpetrator, he says, “We're not seen as sinners anymore.”

  2. Career Support

    DOE Puts Up $85 Million for Grants to Young Scientists

    1. Jeffrey Mervis

    The U.S. Department of Energy (DOE) has committed $85 million next year for a new program to help 50 young scientists establish their research careers. DOE is the latest federal agency to try to lend a hand to this vulnerable population.

    The program is financed by the $1.6 billion given to DOE's Office of Science earlier this year in the government-wide stimulus package. As such, the money must be spent quickly, with the expectation that it will provide jobs and help pull the nation out of its economic doldrums. DOE officials say the program also responds to the recommendations of several expert panels, including the National Academies' 2005 report Rising Above the Gathering Storm. Several other agencies already have programs that target this population, including a greatly expanded effort in the past 2 years by the Department of Defense (Science, 14 November 2008, p. 1037).

    The 5-year awards will be divvied up between tenure-track university scientists and those holding full-time jobs at a DOE national laboratory. Applicants must be within 10 years of having received their Ph.D. Awards for academic scientists can range up to $750,000, whereas lab scientists are eligible for as much as $2.5 million. (DOE will be paying only summer salaries for university researchers, while it foots the entire salary for its lab employees.) The awards “are meant to support proposals from individual PIs [principal investigators], in areas that serve DOE's mission,” explains DOE's Linda Blevins, who will manage the program within the Office of Science.

    Applicants are asked to submit a letter of intent by 3 August, Blevins says, so that DOE can get an idea of the level of interest and the nature of the proposals to be reviewed. “We really don't know how many applicants we'll get, but it's certainly generated a lot of interest from the community,” she adds. Full proposals are due by 1 September, with awards to be announced next spring.

    Although the stimulus money will cover only the first round of awards, Blevins says “we anticipate growing the program” in subsequent years if DOE receives adequate funding from Congress.

  3. Climate Change

    Clouds Appear to Be Big, Bad Player in Global Warming

    1. Richard A. Kerr

    Climate researchers have long viewed clouds' reaction to greenhouse warming as the key to understanding the world's climatic fate. As rising carbon dioxide strengthens the greenhouse, will some clouds thicken and spread, shading the planet and tempering the warming? Or will they thin and shrink, letting in more sunshine to amplify the warming? The first reliable analysis of cloud behavior over past decades suggests—but falls short of proving—that clouds are strongly amplifying the warming. If that's true, then almost all climate models have got it wrong.

    The new study “confirms with observations that low clouds are critical for the climate system's response,” says climate modeler Gerald Meehl of the National Center for Atmospheric Research in Boulder, Colorado. But “it's really a challenge for models” to simulate that response, he adds. If real-world cloud amplification works the way the study indicates, researchers say, global warming could be even worse than the typical model predicts.

    Clouds have been a climate conundrum in part because no one has been keeping an eye on them the way the weatherman has been recording temperature for more than a century. On page 460, climate researcher Amy Clement of the University of Miami in Florida and colleagues consider the two best, long-term records of cloud behavior over a rectangle of ocean that nearly spans the subtropics between Hawaii and Mexico. Other researchers had compiled one of the records from eyeball estimates of cloud cover made by mariners who passed through the region from 1952 to 2006. The other record, which runs from 1984 to 2005, came from satellite measurements, which Clement and her colleagues adjusted to account for calibration shifts from one satellite to the next.

    Leaky clouds.

    Decades-long records show that when sea surface temperature (SST) warms, cloud cover—especially from low clouds (bottom)—decreases (blues, top), letting in more sunlight.


    Between them, the observations recorded the two major climate shifts that roiled the North Pacific during the periods they covered. In a warming episode that started around 1976, the ship-based data showed that cloud cover—especially low-altitude cloud layers—decreased in the study area as ocean temperatures rose and atmospheric pressure fell. One interpretation, the researchers say, is that the warming ocean was transferring heat to the overlying atmosphere, thinning out the low-lying clouds to let in more sunlight that further warmed the ocean. That's a positive or amplifying feedback. During a cooling event in the late 1990s, both data sets recorded just the opposite changes—exactly what would happen if the same amplifying process were operating in reverse. “All of the elements of a positive feedback are there,” Clement says.
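
    In the textbook framing of a climate feedback (a generic illustration, not a formula from Clement's paper; the symbols are the conventional ones, not the authors'), an amplifying loop multiplies whatever warming starts it:

    \[ \Delta T \;\approx\; \frac{\Delta T_0}{1 - f}, \]

    where \(\Delta T_0\) is the warming the initial push would cause on its own and \(f\) is the feedback factor. Values of \(f\) between 0 and 1 amplify the initial warming, as the low-cloud changes described above would, while a negative \(f\) would damp it.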

    Even so, positive low-cloud feedback was only a supposition until the group looked at another sort of satellite measurement of the second natural climate shift. That showed that when decreasing cloud cover let the sun leak through, the additional solar heating was large enough to account for much of the ocean warming. A positive feedback operating in the decades-long climate shifts “is real,” Clement concludes. And other studies link cloud changes in the northeastern tropical Pacific to atmospheric changes across the Pacific.

    But is such a feedback actually working to amplify global warming? To get some indication, Clement and her colleagues checked the archives of a study in which the international Coupled Model Intercomparison Project compared the results of 18 global climate models run under standardized conditions. Clement and her colleagues tested whether each model was properly simulating each element of the positive cloud feedback they had found in the northeastern Pacific.

    When the results were in, only two models showed low clouds producing a positive feedback as observed. One of them stood out from the pack. The HadGEM1 model from the U.K. Met Office's Hadley Center in Exeter produced patterns of warming and circulation changes during greenhouse warming that resembled those of all 18 models averaged together—the best guide available. The group also concluded that HadGEM1's simulation of meteorological processes in the lowermost kilometer or two of the atmosphere—where the key low-lying clouds reside—is particularly realistic.

    As it happens, the HadGEM1 model is among the most sensitive of the 18 models to added greenhouse gases. When carbon dioxide is doubled, the model warms the world by 4.4°C; the median of the models for a doubling is 3.1°C. That gap raises a red flag for Clement. “We tend to focus on the middle of the range of model projections and ignore the extremes,” she says. “I think it does suggest serious consideration should be given to the upper end of the range.”

    Climate researchers agree that Clement and her colleagues may be on to something. “There's been a gradual recognition that this rather boring type of [low-level] cloud is important in the climate system,” says climate researcher David Randall of Colorado State University, Fort Collins. “They make a good case that in [decadal] variability there is a positive feedback. The leap is that the same feedback would operate in global climate change.” The study tends to support an important role for marine low clouds in amplifying global warming, he says, but it doesn't prove it.

    One clear contribution of the study, Randall says, is to point the way toward more reliable climate models. The paper “is definitely a reasonable approach to deciding which models to pay the most attention to,” he says. In its previous international assessments, Randall notes, the Intergovernmental Panel on Climate Change assumed that all models are created equal. “I think we have to get away from that.”

  4. Environmental Monitoring

    Spy Satellites Give Scientists A Sharper Image of Field Sites

    1. Yudhijit Bhattacharjee

    Hajo Eicken has spent the past 10 years monitoring the freezing and melting of sea ice along the 900 kilometers of northern Alaskan coastline. Each winter, the geophysicist at the University of Alaska, Fairbanks, visits a handful of field sites and fills in the gaps with satellite images. But last week, thanks to the U.S. intelligence community, he saw pictures that will allow him to model the breakup at a level of detail he never imagined.


    Newly released images from intelligence satellites are helping researchers track sea-ice melting in the Arctic.


    The boon to Eicken's research—plotting the interplay of ocean currents, tides, and winds—comes courtesy of the National Civil Applications Program at the U.S. Geological Survey (USGS). For the past decade, the program has maintained a library of scientifically useful images taken by intelligence-gathering satellites that have been available only to federal scientists with security clearances. Last week, USGS posted 1199 pictures, taken at 22 environmentally significant sites across the continental United States and six around the North Pole, with a resolution up to 30 times sharper than what Eicken could obtain from commercial satellites. The new images will enable researchers to “chart the progression of ice decay in great detail,” Eicken says, to the level of “individual melt ponds.”

    The U.S. government began collecting these images in the late 1990s under a program called Medea. With few exceptions, the images have been available only to federal scientists. Last fall, as part of a planned release, the intelligence agencies asked the U.S. National Academies whether producing unclassified versions of the images—essentially by stripping out information that might telegraph the satellites' technological prowess or jeopardize national security—would be worthwhile to scientists.

    The panelists were blown away. “What we saw was unbelievable,” says Stephanie Pfirman, head of the environmental sciences department at Barnard College in New York City and chair of the panel. The panel recommended the release of the images. On the eve of the report's publication, the panelists briefed intelligence officials, who promised that the images would be available the very next day. Two hours after the report was released, they were posted. “The response was faster than any of us had expected,” says Thorsten Markus, a researcher at NASA's Goddard Space Flight Center in Greenbelt, Maryland, who served on the panel along with Eicken and three other scientists.

    “We are thrilled that this imagery is coming out,” says Peter Jutro, an Environmental Protection Agency official who sits on the Civil Applications Committee and was a member of the Medea program. James Devine, a senior adviser to the USGS program, says “we are in the process of preparing several hundred more” images, with funding from the intelligence community.

    The panel also recommended that the satellites expand their breadth by tracking moving ice as well as fixed sites. “The community is going to want more,” says Pfirman. “It would be great to add dynamic image collection.”


  5. From Science's Online Daily News Site

    Exploding Raindrops

    Here's a question for a rainy day: How do clouds create such a wide variety of raindrop sizes? The answer, according to stunning new high-speed movies, is much simpler than physicists thought. Drops don't crash into each other as they fall; rather, they balloon like a parachute, then explode. Watch a movie here.

    No Risk in Disclosing Genetic Risks

    Sitting in your doctor's office, you get the bad news: Thanks to a faulty gene, you're 15 times more likely than the average person to develop Alzheimer's disease. But despite the diagnosis, you're unlikely to become more anxious or depressed within the next year, according to a new study.

    Ancient Climate-Change Puzzle

    Carbon dioxide gets a bad rep for contributing to global warming, and deservedly so. But scientists say they can't entirely blame the greenhouse gas for a curious spike in Earth's temperature 55 million years ago. New research reveals that something else also seems to have warmed the planet during that time, though no one's quite sure what it was.

    Elephants Don't Always Keep It in the Family

    In elephant society, nothing is more important than family. From traveling packs of mothers and calves to larger groups that include aunts and cousins, all segments of the creatures' complex social structure are typically composed of relatives. But what happens when these populations are decimated by humans? New research reveals that elephants sometimes bring in nonkin to keep their social groups viable.

    Read the full postings, comments, and more on Science's online daily news site.

  6. Geodesy

    NOAA Project to Measure Gravity Aims to Improve Coastal Monitoring

    1. Brittany Johnson
    Old and new.

    This NOAA Cessna will fly around the country toting the gravimeter, an improvement over the traditional method of surveying and siting National Geodetic Survey markers.


    How high is up? The U.S. government has launched a 10-year, $38 million research project to answer that question in hopes of improving its management of coastal regions and reducing the damage from severe storms and rising sea levels.

    Calculating the elevation of any point on the globe isn't a simple undertaking. The traditional method relied upon surveyors traipsing around the country, repeatedly lining up a telescope on a tripod with a leveling rod to measure the position difference between two points of reference with respect to sea level. Those efforts have produced the National Spatial Reference System (NSRS), a network of more than 1.5 million survey markers throughout the United States that record latitude, longitude, and elevation above sea level. Federal and local agencies use NSRS to help with construction plans, mapping, floodplain management, tropical storm preparation, and many other endeavors.

    Unfortunately, changes in Earth's surface—from earthquakes and other factors—erode the accuracy of those markers over time. The current NSRS contains elevation errors ranging from 40 centimeters to nearly 2 meters, says Juliana Blackwell, director of the National Geodetic Survey (NGS), a branch of the National Oceanic and Atmospheric Administration (NOAA) that operates the reference system. “The accuracies are not what we'd like them to be,” Blackwell says.

    A newer technology, the Global Positioning System (GPS), also generates elevation data—but of the wrong sort. GPS instruments provide what's known as ellipsoid height, based on a model of Earth as a smooth, flattened sphere. Although that's a good fit for the actual shape of the planet, heights measured from it are not useful for generating a topographic map that will show how water flows, for example.

    To do that, you need an accurate depiction of Earth's gravity field, which represents how the pull of gravity varies from place to place because of the planet's irregular surface and makeup. “Water will always follow the gravity field because the gravity field predicts water flow,” says NGS's Vicki Childers. “The gravity field helps determine sea level. And sea-level surface is used to base measurements on.”

    Measuring the gravity field allows scientists to update their model of the geoid—a notional surface on which Earth's gravity potential is the same everywhere, roughly the shape mean sea level would take if extended under the continents. In turn, the geoid provides the reference surface from which a point's orthometric height (orthos being Greek for straight or correct) is measured.
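
    The article leaves the arithmetic implicit; in conventional geodesy notation (the symbols below are the standard ones, not taken from the article), the three height quantities fit together as

    \[ H \;\approx\; h - N, \]

    where \(h\) is the ellipsoid height delivered by GPS, \(N\) is the geoid undulation (the height of the geoid above the reference ellipsoid) computed from the gravity model, and \(H\) is the orthometric height needed for mapping how water flows.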

    The new project is called Gravity for the Redefinition of the American Vertical Datum (GRAV-D). The key instrument is an airborne gravimeter coupled with GPS. Placed inside NOAA's Cessna Citation II jet, the gravimeter will measure the acceleration of gravity at the same time that GPS instruments aboard the plane measure its vertical acceleration. Thus equipped, researchers can produce large-scale regional land and sea surveys without the inconsistencies of leveling, says Childers, project manager for GRAV-D.
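
    Both measurements are needed because the gravimeter cannot distinguish gravity from the aircraft's own motion. In a simplified form of the airborne-gravimetry bookkeeping (an illustration, not a formula from the article), the GPS-derived motion is subtracted out:

    \[ g \;\approx\; f_{\mathrm{meas}} - \ddot{z}_{\mathrm{GPS}}, \]

    where \(f_{\mathrm{meas}}\) is the gravimeter reading, \(\ddot{z}_{\mathrm{GPS}}\) is the plane's vertical acceleration derived from GPS, and further corrections (the Eötvös effect of flying over a rotating Earth, for instance) are omitted here.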

    “[GRAV-D] is a logical progression,” says Thomas Herring, a geophysicist at the Massachusetts Institute of Technology in Cambridge. He says it makes no sense for NGS to rely on a “labor-intensive technique” like leveling when something better exists. “It's part of transforming from historical methods to much less expensive and more reliable methods with GPS,” Herring says.

    Under the first phase of GRAV-D, NGS carried out airborne surveys over Puerto Rico in January and conducted test flights last year in Alaska and along the coast of the Gulf of Mexico. NGS didn't receive money last year to ramp up the project. So NOAA tried again this year, including $4 million for GRAV-D in its request for fiscal year 2010, which begins in October. Although the budget process is still under way, last month spending panels in both the House of Representatives and the Senate exceeded NOAA's overall request for research, all but ensuring that the project can proceed.

    Each leg of the survey takes about a month, Childers says. The project is expected to take 10 years to complete, beginning with the coastal areas and then extending inland. When it's done, scientists will have a new geoid model from which orthometric heights can be calculated more accurately.

    Those heights offer crucial information to public officials responsible for civil infrastructure—roads, buildings, bridges, airports, and other structures along the country's vast coastline and inland floodplains. A recent study NOAA commissioned estimated that the new geoid model will save the country $3.6 billion over 15 years through improved flood control and other steps taken as a result of better knowledge of the height of low-lying areas.

    More precise measurements alone would not have saved New Orleans and the Gulf Coast from the ravages of Hurricane Katrina. But NGS scientists say that arming public officials with more accurate height data about the regions they must manage is bound to save lives and money in the long run.

  7. Biofuels

    ExxonMobil Fuels Venter's Efforts To Run Vehicles on Algae-Based Oil

    1. Robert F. Service

    The floodgates are officially open. After years of skepticism about the promise of biofuels, ExxonMobil has decided to make one of the biggest biofuel bets so far.

    Last week, the world's second largest company announced that it will spend up to $600 million over 5 to 6 years to produce biofuels from algae. Half the money will go to Synthetic Genomics Inc. (SGI), a San Diego, California–based start-up run by genomics pioneer J. Craig Venter. Exxon will spend another $300 million on in-house research including attempts to scale up biofuels production from algae and refine the resulting oils into finished fuels. “This is not going to be easy, and there are no guarantees of success,” says Emil Jacobs, vice president of research and development for Exxon's research division.

    Venter says SGI has already reengineered some strains of algae to secrete hydrocarbons from their cells. That achievement allows them to produce the compounds continuously in bioreactors. That's a big advantage over traditional “farming” methods, in which algae are grown in shallow ponds from which they are harvested and split open for oil that is then refined into fuel. The company plans to build a large facility in San Diego to test thousands of algal species in hopes of finding or reengineering strains to turn out oils at a high rate. Jacobs says Exxon's investment hinges on SGI's meeting a series of research milestones.

    Chemical plant.

    ExxonMobil will pay SGI to make biofuels from algae.


    “The deal is tremendous for biofuels,” says Bill Haywood, CEO of LS9 Inc., a biofuels start-up based in South San Francisco, California. “A move like that tells me they are serious about the future of biofuels.” Exxon is the last major oil company to move into biofuels, notes Divya Reddy, an energy and natural resources analyst in Washington, D.C., with the Eurasia Group, an international policy risk assessment firm. The size of its investment “solidifies that trend,” she says.

    The technology's backers say algae have inherent advantages over other biofuel sources. Algae can be grown on nonagricultural land, they absorb carbon dioxide, and the oils they produce can be refined into conventional transportation fuels that can be distributed using existing infrastructure. The market has begun to recognize those strengths: In 2006, Algenol Biofuels, a Florida-based company, announced an $850 million project with Mexico's Sonora Fields S.A.P.I. de C.V. to produce ethanol from algae. More recently, two algae fuel makers, Sapphire Energy in San Diego and Solazyme in South San Francisco, have raised $100 million and $76 million, respectively, from venture capitalists.

    But although the area is hot, many obstacles remain. Among the biggest are reengineering algae to produce more hydrocarbons and to make molecules that more closely resemble refined gasoline. But Stephen Mayfield, a cell biologist at the Scripps Research Institute in San Diego, California, and a founder of Sapphire Energy, thinks the tallest hurdle is scaling up the process. “That's a key component Exxon will bring to this deal,” Mayfield says.

    The race to market has already begun. LS9, which uses engineered Escherichia coli instead of algae to make fuel, hopes to open a large-scale production facility in Brazil by 2013. Aurora Biofuels in Alameda, California, expects to have a commercial algae biodiesel facility online in 2012, and Algenol plans to begin selling fuel from its facility in Mexico later this year.

    Mayfield and others aren't worried about the competition. Given the global $1-trillion-a-year market for energy, Mayfield says the more biofuels players, the better. “We will use every molecule of renewable energy we can get,” he says.

  8. ScienceInsider

    From the Science Policy Blog

    India's new environment minister, Jairam Ramesh, says India will not agree to mandatory reductions in carbon emissions. Ramesh spoke at a news conference with visiting U.S. Secretary of State Hillary Clinton. “We are simply not in a position to take over legally binding emission reduction targets,” Ramesh said. Clinton said the United States would not impose targets on India or “do anything that would limit India's economic progress.”

    The planned restart of the Large Hadron Collider will be pushed back from September to at least November because of newly discovered “vacuum leaks” in two sections of the accelerator. CERN disclosed the bad news in the latest issue of its online newsletter.

    Thirty-four U.S. Nobel laureates last week called on President Barack Obama to push for a steady funding mechanism in upcoming climate legislation to support clean energy research. Many billions of dollars are already flowing from stimulus funding, they note in a letter to Obama, but the money will run out in October 2010. The Nobelists want the president to follow through on an earlier pledge to spend $15 billion a year for 10 years for research on clean energy.

    Norman Augustine, who is chairing a blue-ribbon panel examining alternative futures for the U.S. human space-flight effort, hinted last week that the panel might shy away from advocating an expensive human mission to the moon, Mars, or an asteroid. Recommendations to the White House are expected by the end of August.

    The Chinese government has banned the controversial application of electroconvulsive therapy for so-called Internet addiction after a clinic gained notoriety for applying electric shocks to unanesthetized teenagers being treated against their will (Science, 26 June, p. 1630).

    For other science policy news, go to ScienceInsider, Science's policy blog.

  9. Imaging

    With ‘Phenomics,’ Plant Scientists Hope to Shift Breeding Into Overdrive

    1. Elizabeth Finkel*

    MELBOURNE, AUSTRALIA—Last May, a scrawny grass, Brachypodium distachyon, joined the exclusive club of plants whose genomes have been sequenced. Brachypodium may look unassuming, but under the hood it is a geneticist's dream. It has a short life cycle and a small, simple diploid genome (bread wheat, by contrast, carries three ancestral genomes) that readily reveals the effects of genetic modification. In short, the temperate grass is a superb model organism for cereals like wheat and rice and for biofuel crops like switchgrass. But Brachypodium's genome alone cannot revolutionize plant breeding.

    Enter plant phenomics. Borrowing imaging techniques from medicine, phenomics offers plant scientists new windows into the inner workings of living plants: infrared cameras to scan temperature profiles, spectroscopes to measure photosynthetic rates, lidar to gauge growth rates, and MRI to reveal root physiology. “Phenomics will give plant scientists the tools to unlock the information coded in genomes,” says Mark Tester, director of the Australian Plant Phenomics Facility (APPF), a new $40 million venture with headquarters in Adelaide.

    A-maizing transformation.

    In this maize leaf, laser confocal microscopy reveals a clear distinction between high activity of photosystem II in mesophyll cells (pink fluorescence) and low activity in bundle sheath cells (purple)—a distinction typical of C4 plants. The green fluorescence comes mainly from lignin in cell walls.


    Institutes worldwide are racing to build facilities with instrument arrays that can scan thousands of plants a day in an approach to science akin to high-throughput DNA sequencing. “This will allow plant physiology to ‘catch up’ with genomics,” says Tester. APPF, the first national lab of its kind in the world, consists of two nodes. One is a High Resolution Plant Phenomics Centre (HRPPC) in Canberra, which opened last week. A debut project of the center is an international collaboration to screen Brachypodium variants for drought tolerance and for less lignin in their cell walls. The second node is the Plant Accelerator in Adelaide, a screening facility that Tester will run and hopes to have online by December.

    “Australia is leading the way,” says David Kramer, a spectroscopist at the Institute of Biological Chemistry at Washington State University, Pullman. But other countries are ramping up fast. In Germany, for example, the Institute for Phytosphere Research (IPR) in Jülich in 2007 established the Jülich Phenomics Centre, which carries out a variety of screens and is developing root imaging. And the Leibniz Institute of Plant Genetics and Crop Plant Research in Gatersleben is carrying out high-throughput screening on crops such as barley and wheat using instruments such as infrared cameras to chart transpiration and fluorescent microscopy to assess photosynthesis.

    From feel to phenomics

    Plant breeders are known for their “feel”: the ability to select subtle traits that enhance a plant's performance. They might key in on the way a plant curls its leaves, for example, or a particular shade of green whose significance escapes the uninitiated. Such craftwork delivered enormous agricultural gains through the mid 1990s. But with yields of many crops having hit plateaus, green thumbs are no longer enough. Modern plant breeders need the equivalent of a watchmaker's magnifying glass and tweezers to tinker with complex and intertwined traits. Phenomics, says Uli Schurr, director of IPR, promises to usher in “precision agriculture and predictive breeding.”

    One trait Australia has in its cross hairs is salt tolerance. High salinity affects one-fifth of the world's irrigated land and two-thirds of Australian cereal crops. But selecting plants for salt tolerance has generally flopped. “We can end up with plants like mangroves that are salt-tolerant but slow growing and not much use,” says Tester. He and Rana Munns of CSIRO Plant Industry in Black Mountain, Canberra, recently found that salt tolerance is connected with a plant's ability to resume growth after osmotic shock—the shutdown in cell growth occurring fractions of a second after a plant is exposed to high salt concentrations.

    Exploiting that trait, Tester's lab earlier this year used a three-dimensional camera to record minute changes in growth responses after wheat plants transplanted into salty soil went into osmotic shock. After crossing varieties and laboriously screening hundreds of plants, one of Tester's Ph.D. students, Karthika Rajendran, appears to have pinpointed a gene that helps plants resist osmotic shock.

    Such mind-numbing screening will be automated and sped up when the Plant Accelerator roars to life. It will have a throughput of 2400 plants a day—10 times the capacity of current labs. Plants will travel by conveyer belt to stations that measure growth rate and color, a sign of tissue health.

    Also expected to get a phenomics boost is an effort to replace rice's inefficient C3 photosynthetic pathway with the C4 pathway found in maize and 40 other plant species. For the same input of water and nitrogen, maize produces twice the carbohydrate content of rice. Researchers have tried to assemble the C4 pathway in rice using maize enzymes, but the efforts failed, perhaps because rice's subcellular structure prevented the enzymes from working in synchrony.

    Phenomics tools can provide snapshots of cellular structure and diagnose steps along the way toward C4 metabolism in live plants. The International Rice Research Institute (IRRI) in Los Baños, Philippines, is screening rice varieties for those with a cellular architecture best suited to house the C4 enzyme assembly. In C4 plants, mesophyll cells turbocharge photosynthesis by delivering carbon dioxide at 10 times atmospheric concentration to sugar-producing bundle sheath cells that surround the veins. For this transfer to work, there can be no more than three to four cells between the veins. IRRI's initial screens are for rice varieties with narrower spacing between the veins. Promising varieties will be shipped to HRPPC, which will use fluorescence microscopy and other tools to interrogate the plants. One approach is to look for a fluorescent signature from a complex of light-absorbing molecules called photosystem II: If these molecules' activities are muted in bundle sheath cells, it means the plant is becoming more C4-like.

    Energy efficient?

    Chlorophyll fluorescence, a measure of photosynthesis, in Arabidopsis seedlings and a wheat ear (inset). Blue depicts high photosynthetic rate, red depicts low. Large numbers of plants can be screened for stress response, for example, or for high yield.


    Kramer's lab in Pullman, meanwhile, is examining the split-second reactions that capture some energy of impinging sunlight while dissipating more than 80% to prevent damage. “If we could reduce this regulatory loss by 1%, we might double yields,” he says.

    Plants vary in how much light they dissipate. The photosynthetic rate of giant cane (Arundo donax), a potential biofuel source, is many times that of rice. Most crops are less efficient and “far too conservative for biofuels use,” Kramer says. “They're playing it too safe.”

    Breeders have never seriously attempted to rev up the photosynthetic rate, says Kramer. Indeed, some high-yield rice varieties have reduced photosynthetic rates. The problem is that the switches that set the photosynthetic rate are poorly understood. To penetrate this mystery, Kramer plans to use the equivalent of a car engine dynamometer: a spectrometer that gauges the photosynthetic engine. The cogs of the engine—carotenoids, plastocyanins, and cytochrome complexes—emit unique spectral signatures in their excited states.

    Kramer hopes to probe the “wear and tear” costs of photosynthesis with an “idea spectrometer,” a device his group designed that is inspired by the “tricorder” of Star Trek fame. At the moment, the spectrometer, the size of a pint of beer, must be wired to a leaf; the goal is to be able to wave it over a plant like the fictional tricorder. Kramer says he would freely provide the device to colleagues to compile a global database of plant performance. “We can push our crop plants harder,” says Kramer. “The question is how far.”

    Kramer and other adherents think the emerging discipline of phenomics will help foment the next green revolution. We now have the tools “to make quantum leaps in crop breeding,” says plant physiologist Robert Furbank, director of HRPPC. “These are the tools we need to feed and fuel the world.”

    • * Elizabeth Finkel is a writer in Melbourne, Australia.

  10. Scientific Publishing

    Data Integrity Report Sends Journals Back to the Drawing Board

    1. Jocelyn Kaiser

    It seemed like a good idea: With digital data routine in nearly every scientific field, and growing concerns about doctored images and demands to share data, why not convene a set of experts to come up with general data-handling guidelines? But a National Academies panel found this task impossible. Instead, its report, released today, offers broad principles for dealing with data but calls on disciplines to work out the details themselves.

    One trigger for the study was a particularly egregious case of scientific fraud: the faking of stem cell data by South Korean researcher Woo Suk Hwang, including cut-and-pasted cell images in a 2005 paper in Science. Faced with other examples of data manipulation, a group of journal editors asked the academies for advice in 2006. Academies officials added a second controversy they considered related: demands from a global-warming skeptic in Congress for data from the scientists who published the so-called hockey stick paper in 1998 in Nature that shows rising global temperatures since 1900. Then the academies convened 17 experts in fields from physics to sociology to look at issues of treating, sharing, and archiving research data.

    Their conclusions, described in a report* this week (see Editorial, p. 368), boil down to three “principles” that are as uncontroversial as motherhood and apple pie: Researchers are responsible for ensuring the integrity of their data; data from published papers should be publicly accessible; and data should be properly archived.

    The report also offers 11 recommendations urging scientists, institutions, journals, and other players to develop standards and provide proper training. The suggestions include a few new points—for example, data-sharing should include not just the raw data but also the computer programs used to analyze it. But there are no detailed guidelines.

    The problem was that every time a panelist made a detailed proposal, another member would say it would not work in their field, says co-chair Phillip Sharp, a molecular biologist at the Massachusetts Institute of Technology (MIT) in Cambridge. So the committee couldn't be too specific, Sharp says. For example, says co-chair and emeritus MIT physicist Daniel Kleppner, astrophysicists don't really have issues with image manipulation because they work with public data sets, and if someone doctors an image, colleagues “can go right back and look” and catch it.

    Journal editors seem a bit disappointed. The National Institutes of Health's Kenneth Yamada, an editor of The Journal of Cell Biology, which has worked out ways to screen for manipulated images that other journals have followed, calls the report's principles “an excellent foundation on which fields can build.” But he suggests a “Phase Two” academies study focusing on digital imaging in biology. Katrina Kelner, managing editor, research journals at Science, who calls the report “welcome” and the principles “useful,” expects journals themselves may have to work out the specifics.

    • * Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age, The National Academies Press

  11. International Treaties

    Test Ban Monitoring: No Place to Hide

    1. Daniel Clery

    A nearly complete global network can reliably spot secret nuclear explosions, researchers say. Will holdout nations now ratify the test ban?

    Network's four senses.

    Clockwise from top left: an auxiliary seismic station at Kerman, Iran; laying cable to an offshore hydroacoustic station, Juan Fernández Islands, Chile; radionuclide station, Resolute, northern Canada; infrasound detectors, Tristan da Cunha, southern Atlantic Ocean.


    VIENNA—On 9 October 2006, the collection of nations known to possess nuclear weapons gained a new member: North Korea. The country had tested a small nuclear device, equivalent to less than 1000 tons, or 1 kiloton (kT), of conventional explosives. The test highlighted a vexing question for those trying to curb the spread of nuclear weapons: If the world community outlaws nuclear bomb tests, can it set up an effective system to police that ban?

    A network of sensors designed to do exactly this—by detecting nuclear blasts anywhere on Earth—was taking shape when the North Koreans ran their test. It was being built under the auspices of the Comprehensive Nuclear-Test-Ban Treaty (CTBT), an international agreement that grew out of the Cold War and was codified in the 1990s. A decade later, it is still waiting to come fully into force.

    CTBT is unique among nuclear treaties in that it includes a worldwide verification system financed by the signatories, an array of 337 sensor stations and laboratories scattered across 89 countries. The stations pick up seismic tremors in the earth, listen for acoustic signals in the air and oceans, and sniff out radionuclides carried on the wind. Although this monitoring network was only 60% complete in 2006, more than 20 of CTBT's seismic stations—one as far away as South America—immediately picked up signals from the North Korean test. In less than 2 hours, it delivered an automatic analysis of the blast's estimated time, location, and magnitude to states that had signed the treaty.

    The network performed again in May of this year when North Korea carried out a second test. This time, the bomb was detected by 61 seismic stations, reflecting its larger blast and the network's growth. Within 48 hours, CTBT experts had narrowed down the location of the explosion to an area in North Korea just 10 kilometers across. “There could not be a clearer indicator of the need for these verification tools,” says Tibor Tóth, head of the CTBT Organization (CTBTO).

    Yet the treaty remains hamstrung: Although 181 nations have signed CTBT and 148 have ratified it, the treaty cannot enter into force until all 44 states with nuclear weapons or reactors have ratified it; until then, it neither legally bans tests nor can it deploy its full arsenal of verification measures. Nine of those states have yet to ratify, including the United States. In 1999, the U.S. Senate debated and rejected CTBT. Critics argued that the verification system would not be sensitive enough to detect cheating and that, with a little ingenuity, a purposeful nation could conceal a test. But during the 2008 presidential election campaign, candidate Barack Obama said he would tackle this issue: “As president, I will reach out to the Senate to secure the ratification of CTBT at the earliest practical date.”

    Much has changed since the Senate rejected the treaty a decade ago. CTBT's monitoring system, barely begun in 1999, is now nearly 75% complete. And it has been scrutinized over the past year by hundreds of outside researchers. CTBT managers offered them access to data from the network so that they could test its readiness, probe its strengths and weaknesses, and suggest improvements. “This was potentially dangerous. They could have spotted flaws,” says planetary scientist Raymond Jeanloz of the University of California, Berkeley.


    CTBT's monitoring system aims to detect clandestine nuclear explosions anywhere in the world with 337 stations using four different sensing techniques.


    Last month, those researchers were invited to Vienna for the first conference of this International Scientific Studies (ISS) program. The reviews were mostly positive: Many lauded CTBT's International Monitoring System (IMS) for the quality and global coverage of its data, declaring that in many parts of the world the network is more sensitive than the treaty requires. “[The ISS] was an amazingly successful process,” Jeanloz says. Now supporters of CTBT are returning to the political challenge, trying to persuade holdout nations to come on board. “If we don't make this step, the whole thing is vulnerable to falling apart,” says David Hafemeister of the Center for International Security and Cooperation at Stanford University in Palo Alto, California.

    An ear to the ground

    Because the best way to hide a nuclear test is to carry it out underground, the backbone of IMS is an array of 170 seismic stations: 50 primary stations that are online around the clock and 120 auxiliary stations that send data on request.

    The treaty bans all nuclear explosions, no matter how small, but for practical reasons IMS is designed to detect an explosion equivalent to 1 kT or bigger. This corresponds to a seismic shock of roughly magnitude 3.5. (The bomb that was dropped on Hiroshima was about 15 kT.) This cutoff was chosen in part to reduce noise in the system, because hundreds of earthquakes and mining explosions every day produce shocks below this level. A threshold lower than 1 kT would overwhelm the CTBT's International Data Center (IDC) in Vienna with events requiring analysis for signs of a nuclear test. And weapons experts have said that blasts smaller than 1 kT would not be militarily useful to a fledgling nuclear power.

    Seismic events produce a variety of waves that can be picked up by monitoring stations. Body waves, which travel deep through the earth, come in two kinds: P-waves, which oscillate in the direction of travel, and S-waves, which vibrate crosswise. Surface waves are slower, but they do most of the damage in earthquakes. Researchers have learned to identify which direction body waves are coming from, and the differences in arrival times reveal the distance of the source. Surface waves give information on the depth of the event and its magnitude. The tricky part of nuclear weapons verification is distinguishing nuclear tests from natural events and innocent man-made ones.
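
    As a rough illustration of how arrival-time differences give distance (a textbook back-of-the-envelope calculation, not one from the article, with typical crustal wave speeds assumed), the lag between the P- and S-wave arrivals at a station converts directly into a source distance:

    ```python
    # Back-of-the-envelope source-distance estimate from the S-minus-P arrival-time lag.
    # The wave speeds are assumed, typical crustal values; real analyses use detailed velocity models.
    VP = 6.0   # P-wave speed, km/s (assumed)
    VS = 3.5   # S-wave speed, km/s (assumed)

    def distance_from_sp_lag(lag_seconds: float) -> float:
        """Distance in km to a source whose S wave arrives lag_seconds after its P wave."""
        return lag_seconds * VP * VS / (VP - VS)

    # Example: a 60-second S-P lag implies a source roughly 500 km away.
    print(f"{distance_from_sp_lag(60.0):.0f} km")
    ```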

    “Different events produce different wave types in different proportions,” says seismologist Paul Richards of the Lamont-Doherty Earth Observatory of Columbia University in Palisades, New York. However, a mine collapse in Germany in 1989 and two others in 1995 in Russia and the United States cast doubt on the precision of seismic fingerprinting because all three looked disturbingly like nuclear tests to seismologists. After much analysis, researchers found an unmistakable difference: The signal from a mine collapse starts with a trough as the ground moves inward, whereas an explosion always starts with a peak.

    This knowledge gave seismologists confidence that IMS would be able to spot nuclear explosions. The nascent system was put to the test in August 1997 when stations detected an event of magnitude 3.3 close to Novaya Zemlya, the Soviet Union's old Arctic test site. It seemed as though Russia might be testing the abilities of CTBT. Analysis of the signal soon showed the source to be offshore beneath the Kara Sea, but the surface waves were too weak to be of any use in distinguishing between a blast and a quake. Researchers used a new technique, which looks at “regional” seismic waves that have traveled less than 1000 kilometers through the crust and upper mantle. By comparing the magnitudes of the P-waves and S-waves, they were able to confirm that the event was an earthquake.
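
    A minimal sketch of the P-versus-S comparison used to call the Kara Sea event an earthquake (the amplitude values and the decision threshold below are illustrative assumptions, not CTBTO parameters): explosions put relatively more energy into P-waves than earthquakes do, so a high P-to-S amplitude ratio raises suspicion.

    ```python
    import math

    def log_ps_ratio(p_amplitude: float, s_amplitude: float) -> float:
        """Log10 ratio of regional P-wave to S-wave amplitude at one station."""
        return math.log10(p_amplitude / s_amplitude)

    # Illustrative threshold: explosions cluster at higher P/S ratios than earthquakes.
    THRESHOLD = 0.0  # assumed; in practice tuned per region and frequency band

    def screen(p_amplitude: float, s_amplitude: float) -> str:
        return "explosion-like" if log_ps_ratio(p_amplitude, s_amplitude) > THRESHOLD else "earthquake-like"

    # S-waves strong relative to P-waves, as in the 1997 Kara Sea event, point to a quake.
    print(screen(p_amplitude=1.0, s_amplitude=2.5))  # -> earthquake-like
    ```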

    Researchers at the ISS meeting pointed out that in certain areas, such as around former nuclear test sites, stations have been sited carefully to ensure that monitoring is more sensitive than required by the treaty. At Novaya Zemlya, a test of 0.1 kT would probably not go undetected. This resolution is something “we dreamed of” 15 years ago, says Yves Caristan, director of the Saclay Institute of Nuclear Studies of France's Atomic Energy Commission. “There has been significant improvement over the past few years, particularly in the use of regional seismic data,” says Jay Zucca of the Lawrence Livermore National Laboratory (LLNL) in California.

    Some researchers said detection could be improved with more stations around the Southern Ocean and in India and Pakistan. (Neither has signed the treaty.) And some called for boosting the network further by using auxiliary stations more regularly and gathering data from the many thousands of civilian seismic stations. Gideon Frank, vice chair of the Israel Atomic Energy Commission, made a pitch for basic research: “Noise is still a problem. We need to develop a much deeper understanding of natural events to improve screening,” he says.

    Watching the waves

    IMS also has a smaller network of sensors to listen for explosions carried out underwater, just above the ocean surface, or below the ocean bottom. Sound travels very efficiently in water: Kiyoshi Suyehiro of the Institute for Research on Earth Evolution in Kanagawa, Japan, says a 20-kilogram test charge of TNT was detected 16,000 kilometers away. As a result, IMS has just 11 hydroacoustic stations positioned around the vast expanses of the Atlantic, Pacific, and Indian Oceans in the Southern Hemisphere. Sound propagates in deep water especially well at a depth of about 1000 meters in what is known as the sound fixing and ranging (SOFAR) channel. Six of the IMS's hydroacoustic stations consist of SOFAR hydrophones in deep water as much as 100 kilometers from the coast of remote islands. Each is a set of three hydrophones anchored to the seabed and connected to shore-based stations by cable.

    In addition to hydrophones, IMS has five stations to detect T-phase waves, the third or tertiary waves, after P and S, from a seismic tremor. These seismic stations pick up waves that have traveled into the ocean as hydroacoustic waves and then back into the ground. To get the best results, such stations need to be sited near coasts or on small islands. Researchers here were enthusiastic about the scientific spinoff of the hydroacoustic network because nothing on this scale has been available before. “The data from this network are unprecedented. Many applications … we haven't even touched yet,” says Harry Miley of the Pacific Northwest National Laboratory in Richland, Washington.

    Blowing in the wind.

    This atmospheric transport model shows how radionuclides spread from North Korea's 2006 nuclear test (bottom left) to the RN16 monitoring station in Yellowknife, Canada (top right).


    To listen for test explosions in the atmosphere, IMS has revived the almost-forgotten science of infrasound. Very low frequency sound, or infrasound, was first detected in 1883 following the eruption of Krakatoa in Indonesia. Infrasound waves shattered windows hundreds of kilometers away and circled the globe seven times, leaving a trace on microbarometers far from the volcano. There was much scientific interest in infrasound during the era of atmospheric testing, but it died away as atmospheric testing was curtailed. CTBT has caused a “renaissance” in infrasound research, says Elisabeth Blanc of France's Atomic Energy Commission.

    An infrasound detector consists of two dozen pipes, each about 3 meters long, radiating from a central chamber. Arriving infrasound waves travel down the pipes; the minute changes in pressure they cause in the chamber are registered by a microbarometer. The pipe arrangement serves to reduce noise from wind, and at windy sites a number of detectors are combined to further improve the signal-to-noise ratio. Thirty-nine of a planned 60 infrasound stations are now operating and appear to be working well. The 2004 eruption of Manam in Papua New Guinea was detected by 13 IMS stations, some more than 1000 kilometers away. In 2005, five IMS stations, up to 1500 kilometers distant, picked up the Buncefield oil depot explosion north of London. In 2008, a bolide that exploded over Oregon and the explosion of an ex-military ammunition depot in Gërdec, Albania, were both detected. “This is far larger and more sensitive than any previous network,” says Alexis Le Pichon of France's Atomic Energy Commission. Although challenges remain, he says, including dealing with wind noise and seasonal changes to propagation, “the 1-kT goal [for IMS] is fully achieved.”

    Irrefutable evidence

    No matter how well CTBT seismic and sound networks function, there's no “smoking gun” evidence of the nuclear nature of an explosion unless radionuclides are detected. For that reason, IMS's fourth and final pillar is dedicated to radionuclide monitoring. Although this system can confirm that an explosion was nuclear, it has a disadvantage: It can be days or weeks after an event before any data are recorded. Fifty-six out of a total of 80 radionuclide stations are already operational. All will have particle detectors: essentially air filters that gather specks of dust carried on the wind; the filters are removed daily and checked in a gamma-ray detector for radioactive isotopes.

    Half the stations will also be equipped to detect radioactive noble gases, including xenon. Noble gases do not react with anything and so are more likely to escape through cracks in the rock into the air following an underground test. Some xenon isotopes have half-lives measured in hours or days; detecting them suggests that an event happened recently. “The network can detect tests well below 1 kT anywhere in the world,” says Wolfgang Weiss of Germany's Federal Office for Radiation Protection.
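    As a rough, back-of-envelope illustration of those time scales, the short calculation below assumes a half-life of about 5.2 days (roughly that of xenon-133) and asks how much of a prompt release would survive a 12-day journey to a distant station. The numbers are indicative only, not an IMS calculation.

```python
# Back-of-envelope decay check, assuming (for illustration) a xenon isotope with
# a half-life of about 5.2 days: roughly how much of a prompt release would be
# left by the time it reached a distant station 12 days later.
half_life_days = 5.2
days_in_transit = 12.0
fraction_left = 0.5 ** (days_in_transit / half_life_days)
print(f"fraction remaining after {days_in_transit:.0f} days: {fraction_left:.2f}")   # ~0.20
```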

    To compensate for the time lag between the release of radionuclides from a detonation and their detection, IDC uses sophisticated atmospheric transport models that can reconstruct the likely travel paths particles might have followed. The computers are constantly fed weather information from the World Meteorological Organization (WMO). IDC staff can run these models backward to see where detected radionuclides might have come from and whether this corresponds with a suspected test. “The technology has come a long way in the past 10 years, especially with the use of satellite data,” says WMO's Peter Chen.
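    The sketch below illustrates, in toy form, the backward-trajectory idea described above: starting from the detection point and stepping a parcel of air backward in time through a wind field. The wind field, step sizes, and coordinates are invented for illustration; the IDC's operational models use full three-dimensional meteorology from WMO and track vast numbers of particles.

```python
# Toy backward-trajectory sketch: step a detected air parcel backward in time
# through a (hypothetical) gridded wind field to estimate where it came from.
# This only illustrates the idea; operational transport models are far richer.
import numpy as np

def wind_at(lon, lat, t):
    """Hypothetical smooth wind field (m/s) standing in for real analyses."""
    u = 10.0 + 3.0 * np.sin(np.radians(lat)) * np.cos(t / 86400.0)
    v = 2.0 * np.cos(np.radians(lon))
    return u, v

def back_trajectory(lon, lat, t_detect, hours=288, dt=3600.0):
    """Integrate backward from the detection point, one hour per step."""
    track = [(lon, lat)]
    t = t_detect
    for _ in range(hours):
        u, v = wind_at(lon, lat, t)
        # convert m/s to degrees per time step (rough spherical geometry)
        lat -= v * dt / 111_000.0
        lon -= u * dt / (111_000.0 * np.cos(np.radians(lat)))
        t -= dt
        track.append((lon, lat))
    return track

# 12 days (288 hours) backward from a detection near Yellowknife's coordinates
path = back_trajectory(lon=-114.4, lat=62.5, t_detect=0.0)
print("estimated source region after 12 days:", path[-1])
```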

    The radionuclide network worked perfectly following the 2006 North Korean test. Twelve days after the seismic signal was recorded, xenon was detected at a station at Yellowknife in Northern Canada, and transport models showed it could have come from Korea. This success was all the more reason for surprise when, after this year's North Korean test, no radionuclides were detected. “The sensors were working fine,” says Anders Ringbom of the Swedish Defence Research Agency. “If there was a prompt release, … we would have seen it.”

    This unexpected blank has renewed debate about evasion scenarios: methods by which a potential cheat could try to fool IMS into not recognizing a test. Decoupling, for example, is a way of reducing the seismic signal of an underground explosion by carrying it out in the center of a large cavity. This is even more effective if the surrounding rock is soft. The only successful fully decoupled test was carried out by the United States in 1966: a 0.38-kT blast in a 34-meter-wide salt cavity. But such small tests are not militarily useful. Researchers estimate that to fully decouple a 5-kT explosion would require a cavity 85 meters across, almost large enough to accommodate the Statue of Liberty with its pedestal. “It would require an engineering effort akin to building the World Trade Center. … It's possible but not really practical,” says Andreas Persbo of the Verification Research, Training and Information Centre in London.
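    Those cavity sizes are roughly consistent with the common rule of thumb that the cavity radius needed for full decoupling grows with the cube root of the yield. The minimal check below takes that scaling as an assumption and the 1966 test as its reference point.

```python
# Rough check of the cavity sizes quoted above, assuming the common rule of
# thumb that the size needed for full decoupling scales as yield**(1/3).
ref_yield_kt = 0.38      # 1966 U.S. fully decoupled test
ref_diameter_m = 34.0    # salt cavity used for that test

def decoupling_diameter(yield_kt):
    """Cavity diameter (m) needed to fully decouple a given yield, by cube-root scaling."""
    return ref_diameter_m * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

print(f"5 kT -> ~{decoupling_diameter(5.0):.0f} m across")   # ~80 m, in line with the ~85 m cited
```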

    Ground truth.

    Inspectors collect air samples to test for radioactive argon during a 2008 full-scale field trial of an on-site inspection in Kazakhstan.


    Similarly, it is possible to contain radionuclides by exploding a bomb of the right size in a rock that will melt to form a glassified cage impervious to gases. Such evasion takes skill and considerable experience with nuclear weapons, neither of which experts believe North Korea yet has. Many at the ISS meeting think that on this occasion the North Koreans just got lucky. “Maybe it was well-contained purely by chance. It is clear from the seismology that it was a huge explosion,” says Lars Ceranna of Germany's Federal Institute for Geosciences and Natural Resources.

    The puzzle created by the North Korean blast this year, CTBT proponents say, highlights the importance of forging ahead and ratifying the treaty. Once it is fully in force, its Executive Council, if faced with a suspected test like the one in May, can call for CTBT's ultimate verification measure: an on-site inspection. Within days of a suspected test, a team of up to 40 people can be on the scene and scouring an area up to 1000 square kilometers using overflights, mobile radionuclide detectors, microseismic arrays to detect aftershocks, gamma-ray detectors, ground-penetrating radar, magnetic and gravitational field mapping, and electrical conductivity measurements. CTBTO held a full-scale dress rehearsal of an on-site inspection last year in Kazakhstan. “Recent events in Korea point up the importance of on-site inspection,” says LLNL's Zucca. “There was obviously an explosion; there was no measured xenon. On-site inspection can resolve these questions.”

    CTBT officials were pleased by the positive report card from the researchers at the ISS meeting. Their approval may even help persuade some of the holdout governments to reconsider their position on the treaty. But in the end, the decision on whether to put the treaty to use will not be a technical one. “How good is good enough?” asks Jenifer Mackby of the Center for Strategic and International Studies in Washington, D.C.: “That's for the politicians to decide.”

  12. International Treaties

    Comprehensive Test Ban: The Long Road

    1. Daniel Clery

    Science gives a timeline of the steps that have been taken over the past half-century leading to the current effort for a comprehensive ban on nuclear weapons testing.

    In the early 1950s as the extent of radioactive fallout from atmospheric blasts became widely known, the calls grew louder for a ban on nuclear weapons testing. Steps over the next half-century led to the current effort for a comprehensive ban.

    1963 The Partial Test Ban Treaty forbade nuclear tests in the atmosphere, oceans, and space. The treaty did not have its own verification measures but relied on monitoring by signatory states.

    1976 An ad hoc Group of Scientific Experts (GSE) began assessing whether it was possible to police a complete testing ban. GSE was made permanent by the United Nations Conference on Disarmament in 1982. Although there was little diplomatic movement during the Cold War years, GSE honed its skills at detecting nuclear explosions.

    1991 Countries that had signed the partial test ban agreed to work toward a total ban; formal negotiations began in 1993.

    1996 After 3 years of intense bargaining, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in September at the United Nations headquarters in New York City; it was signed that day by all five nuclear weapon states and 66 others. To date, 181 countries have signed, leaving only 14 outside it. The treaty also requires ratification, a political process—usually a parliamentary vote—that legally binds the country to abide by CTBT. Some 148 have ratified it so far. But the negotiators set a high bar for the treaty to come into full force: They compiled a list of all 44 countries that in 1996 had either nuclear weapons or reactors for power or research and required that every one of them ratify. Nine have yet to do so—China, North Korea, Egypt, India, Indonesia, Iran, Israel, Pakistan, and the United States.

    1999 The U.S. Senate failed to ratify CTBT by a vote of 51–48 in October.

    2008 Candidate Barack Obama said: “As president, I will reach out to the Senate to secure the ratification of the CTBT at the earliest practical date.” To be approved, the treaty must be endorsed by two-thirds of the Senate.

  13. Ecology

    Deadly Flights

    1. Andrew Curry*

    Massive wind turbines seem to be killing more and more migratory bats, prompting research into these neglected creatures and efforts to minimize the toll.


    Migrating bats have proven tough to study.


    On a warm, late-April night, Volker Kelm drives his battered station wagon across a bleak expanse of scrubby fields a few kilometers from the German border with Poland. The site is a brown coal strip mine owned by the German energy giant Vattenfall; the only other vehicles on the access roads are massive earthmovers. The Berlin-based environmental consultant parks on a dirt access road and pulls out what looks a bit like an old-fashioned tape recorder but is actually a monitor capable of picking up ultrasonic calls from bats ranging up to 120 kilohertz.

    Tuning the detector to the 32-kilohertz range favored by Nyctalus noctula, one of Europe's most common bat species, Kelm starts scanning the sky. At 20:59, just as the day's last light dwindles away, the bat detector begins to chirp. Kelm watches as a trio of Nyctalus bats—known as the common noctule all across Europe—makes its way across the field in the direction of a long line of blinking red lights on the horizon: dozens of wind energy turbines, a night's flight away.

    With 40-meter blades whose tips move at up to 360 kilometers per hour, wind turbines are what bring Kelm out tonight to spy on bats. Vattenfall wants to build a new wind farm on a tapped-out part of the mining site, but the turbines have a reputation for killing bats, and European regulations compel the company to hire someone to conduct an environmental assessment. At a glance, the mine-scarred ground seems an unlikely home to bats. “For bats, the countryside here is not that interesting,” Kelm says. “There are no meadows, no rivers—it's really a desert here.”

    But scientists have learned that even if there are no bat roosts nearby, wind turbines can pose a threat. Like many birds, some bat species are migratory, moving long distances on a seasonal basis, usually from summer feeding grounds to winter hibernation sites. The common noctule, for example, lives in forests during the warm summer and fall months but may travel hundreds of kilometers to caves to hibernate. Fortunately for Vattenfall, the three Nyctalus bats Kelm spotted are the only migratory bats he will see all night, even with the help of night-vision goggles.

    Kelm's surveys won't always bring such good news. With wind power booming around the world—in Germany alone, nearly 20,000 wind-energy installations have been built since 1990—researchers are seeing a marked increase in dead bats. The turbines simply rotate their blades too quickly for the winged mammals to avoid. “There's nothing that fast in nature,” Kelm says. “They're using sonar to look for butterflies and insects, not windmill blades.”

    The deaths have led to a flurry of research on migratory bats and their behavior. “The problem with bats and wind energy has pushed a lot of work that wouldn't have occurred otherwise,” says Edward Arnett of the Austin, Texas–based nonprofit Bat Conservation International. Indeed, at a January conference in Berlin on migratory bats, wind farms were a dominant theme. Scientists are racing to figure out what brings the bats in contact with wind turbines, and what can be done to save them.

    There are no easy answers, in part because little is known about migratory bats. “There are huge questions: ‘How big are the populations? Where do they migrate? Are they really being killed by wind farms?’ Sometimes it looks like that, but we don't know,” says Martin Wikelski, an expert on bat migration at the Max Planck Institute for Ornithology in Radolfzell, Germany. And without concrete data, persuading government regulators and energy companies to relocate proposed wind farms, let alone change the operations of existing turbines or shut them down, is difficult.

    Bat signals

    The first hints of a problem were reported in Europe almost a decade ago, when ornithologists and environmental consultants looking for evidence of bird collisions at wind-energy sites began noticing dead bats on the ground. The bat fatalities became particularly apparent in Germany, which has more wind-energy facilities than any other country and whose central location in Europe makes it a crossroads for migratory birds and bats.

    Still, getting definitive death counts wasn't easy. Bats are small, light, and hard to spot without experience and training. “They look like little rocks on the ground. They sort of blend in because of their size and color,” says Boston University bat biologist Thomas Kunz. By the time the sun comes up on a field full of bat casualties, scavengers such as foxes and coyotes may have made off with many of the carcasses, further skewing estimates of the number of bat kills.

    In the fall of 2003, consultants looking for bird kills at Mountaineer, a wind-energy facility located atop a wooded, West Virginia mountain ridge, found large numbers of eastern red bats and hoary bats dead near the turbine blades. “There were a few hundred bat fatalities, unlike anything in the literature,” says Paul Cryan, a U.S. Geological Survey research biologist based in Fort Collins, Colorado. Adjusting for the limitations of ground surveys, researchers estimated that the Mountaineer windmills alone were killing between 1400 and 4000 bats a year (Science, 9 April 2004, p. 203). Suddenly, the problem was no longer just a European one.

    As dead bats were tallied on both sides of the Atlantic, one thing rapidly became clear: Wind-energy plants aren't equal-opportunity killers. “Migratory bats are the ones that die in wind parks, not the locals,” Kelm says. Indeed, in Germany, 90% of bat fatalities at wind farms are from just five species, all migratory. North American researchers found similar patterns.

    Unfortunately, migratory bats are notoriously uncooperative research subjects: They're nocturnal, solitary, and too small to be tracked with current GPS or radio transmitter tags, which at a minimum of 16 grams weigh almost as much as the average 20-gram bat. The traditional method for studying these bats has been banding—attaching lightweight metal tags to a bat's forearm—but that relies on someone finding and reporting the tag, either by catching the bat somewhere along its migration route or by finding a dead one. Still, thanks to banding programs that date back almost a century, German researchers have a general idea of European bat migration. It tends to follow lines of latitude, as opposed to the north-south migrations of birds, for example, but the data are frustratingly vague.

    U.S. bat scientists have had it even worse. Because of concerns among biologists over disturbing bats during their hibernation period, there has been a moratorium on bat banding in North America for decades. As a result, there's almost no information on where migratory bats in the United States come from. “We don't really know where they are for much of the year,” says Cryan. “They're not an easy animal to follow.”

    Lethal blades.

    Hoary bats (Lasiurus cinereus) are among the bats most often found dead (inset) near wind turbines in North America.


    With no way to document the migration corridors the animals use, researchers have turned to what biologist Erin Baerwald of the University of Calgary in Canada calls “destructive sampling,” examining the corpses of dead bats found below turbines for clues to their diet during migration, brain development, genetics, age, and cause of death. Dissecting these bats has yielded some surprises. A study out last summer showed that many are killed not by collisions with turbine blades, but by barotrauma, fatal bubbles or ruptures in bats' lungs and hearts caused by the low-pressure zones the massive blades create in their wake.

    Biologists are also deploying a rapidly advancing array of technological tools to study how bats interact with the blades and towers, some of which now reach as high as 150 meters. Researchers can now install “batcorders” inside turbines that are capable of recording hundreds of hours of bat calls. Other techniques include thermal imaging to watch bat behavior around turbines and night-vision goggles to spot their flight paths.

    The data provided by these gadgets are yielding new insights into migratory bat behavior and clues as to how to reduce the impact of wind farms. Christian Voigt, a behavioral physiologist at the Leibniz Institute for Zoo and Wildlife Research in Berlin, notes that during their long-distance travels and hunting forays, migratory bats don't stick as close to the ground as locals do, and that may put them on a collision course with the blades of windmills.

    There's even a possibility the turbines actively attract bats. Researchers have observed tree bats flocking around tall trees during mating season, perhaps using them as rallying points for mating and roosting. As the most prominent object on the horizon, windmills may look like particularly tall trees to migrating bats crossing unfamiliar landscapes. That has led to concerns that preconstruction surveys like Kelm's are not that useful, because bat activity could increase after a turbine is built.

    Minimizing the danger

    Amid all the uncertainties, scientists are gathering potentially useful data. The activity of migratory bats spikes in the late summer and early fall, when the animals have given birth to their young and are actively hunting to store up energy for winter hibernation. “During the fall migratory period, there's twice as many bats and birds out there as in the spring,” Kunz says.

    And bat recorders placed on and around windmills detect the highest levels of bat calls on warm, dry nights when wind speeds are below 6 meters per second—ideal conditions for insects, and for the bats that feed on them. Drawing on such data, Arnett and bat biologist Oliver Behr of the University of Erlangen-Nürnberg in Germany have set up separate experiments in the United States and Germany to see if feathering windmills—shifting the angle of the blade so that it's parallel to the wind—on nights when bats are most likely to be active reduces bat fatalities. Arnett, for example, worked with Portland, Oregon–based Iberdrola Renewables to randomly feather windmills at a Pennsylvania facility over the course of 3 months last year. The results were encouraging: Daily ground surveys during the study found between 53% and 87% fewer bat fatalities at the inactive windmills.

    And because bats tend to be most active on nights with low wind speeds, Arnett says the financial impact of “operational mitigation” would be minimal. “The projected loss to the facility was 0.3% to 1% of annual production,” Arnett says. “We think it's worth it to reduce the fatalities.” This summer, Arnett is continuing the study and also testing acoustic devices that might drive bats away from windmills by interfering with their sonar abilities, much as marine biologists have warned dolphins away from fishing nets.
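    A minimal sketch of the kind of operational-mitigation rule these experiments test is shown below: feather the blades on low-wind nights during the migration season. The season, wind threshold, and function names are illustrative, not the actual settings used in the Pennsylvania or German trials.

```python
# Sketch of an operational-mitigation rule of the sort described above: feather
# the blades on low-wind nights during the fall migration season. The cutoff
# values and season are illustrative assumptions, not the trial settings.
from datetime import datetime

def should_feather(now: datetime, wind_speed_ms: float, is_night: bool) -> bool:
    in_migration_season = 7 <= now.month <= 10          # late summer / early fall
    low_wind = wind_speed_ms < 6.0                      # below typical bat-activity threshold
    return in_migration_season and is_night and low_wind

# A calm August night: the rule says to feather the blades.
print(should_feather(datetime(2009, 8, 15, 23, 0), wind_speed_ms=4.2, is_night=True))   # True
```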

    For his part, Kelm will be back at the German strip mine every 10 days for the next 5 months, listening for the calls of the common noctule. Ironically, the very wind turbines that threaten these creatures of the night have helped scientists understand them better. “In the last 15 years, people have started really talking about bats, really brought them out of the darkness and into the light, so to speak,” Kelm says. “Wind energy provoked research and has provided results.”

    • * Andrew Curry is a freelance writer based in Berlin.

  14. Microbiology

    Phytoplasma Research Begins to Bloom

    1. Evelyn Strauss

    Spread by insects, bacteria called phytoplasmas co-opt plant development, sometimes creating beauty but more often bringing devastation.

    Tainted beauty.

    Commercial poinsettias owe their lushness to a bacterial infection.


    Ancient Chinese rulers and lovers of Christmas decorations have been unwitting fans of an obscure group of bacteria called phytoplasmas. When these microbes infect peonies, the plants' flowers can emerge not in the typical red or yellow colors, but in a delicate green. This hue was considered so attractive and valuable about 1000 years ago in China that the Song Dynasty's imperial court received a special annual tribute composed of the blossoms. More recently, phytoplasmas have helped brighten winter holidays by transforming otherwise gangly poinsettias, with their eye-catching red leaves, into bushy ornamentals.

    But most effects of these microbes on plants are far from pretty. They shrivel grapes in Europe and Australia; stunt corn growth in South America; destroy pears and apples in the United States and Europe; ruin peanuts, sesame, and soybean in Asia; and sicken elms, coconuts, asters, and hydrangeas on multiple continents. Just one 2001 phytoplasma outbreak in apple trees caused a loss of about €25 million in Germany and about €100 million in Italy. Epidemics among coconut palms have destroyed the livelihoods of many people in Africa and the Caribbean who depend on the trees for nourishment, building materials, and income. And as the world warms up, these attacks on food crops, lumber and shade trees, and ornamental flowers will likely grow, in part because the insects that transmit the bacteria are expected to expand their ranges north and south.

    For all the destruction that phytoplasmas inflict, one might expect that dozens of agricultural companies and academic labs have generated abundant amounts of information about them. But study of these plant pathogens got off to a slow start. For almost half a century, plant pathologists thought phytoplasmas were viruses. To this day, the inability to grow these bacteria outside plants or insects hinders efforts to get a handle on their biology and genomes. However, the knowledge scientists have managed to glean has nurtured their appreciation for the diverse talents that these bacteria demonstrate.

    Indeed, although phytoplasmas are more petite and possess smaller genomes than most bacteria, they manage a complex life cycle that involves two distinct environments: the sugar-conducting tissue—or phloem—of plants and various organs of the sap-sucking insects that spread them. It's “pretty amazing” that “they seem to do so well in both situations,” says Amy Charkowski, a plant pathologist at the University of Wisconsin, Madison.

    In 2004, scientists published the first full phytoplasma genomic sequence and, since then, they've completed three additional ones. For the small band of biologists studying these microbes, sequencing the genomes is “the most exciting thing that's happened in the last decade,” says Assunta Bertaccini, a plant pathologist at the University of Bologna in Italy. With that information, researchers have begun to elucidate how phytoplasma proteins manipulate plant physiology and insect behavior, findings that might inspire novel measures to stem the devastating agricultural infections around the world.

    Viral blinders

    In the 1920s, researchers began to go astray when they tried to pin down the cause of aster yellows, a disease that destroys crops, orchards, and ornamental plants. Scientists established that insects spread this malady, an observation that fostered the notion that it was yet another viral blight transmitted by the creatures. Indeed, some symptoms of the disease, including the yellowing that begat its name, resembled those of known viral infections. For those reasons and more, scientists spent the next 40 years convinced that a virus, or viruses, causes yellows-type diseases and other insect-transmitted scourges for which there was no standard evidence of bacteria or fungi. “The most prominent people knew it was a virus, so we looked for a virus,” recalls virologist Karl Maramorosch of Rutgers University in New Brunswick, New Jersey. “But we didn't find it.”

    Scientists missed clues about the true culprit, says Maramorosch. In 1957, he noticed that insects injected with both the antibiotic tetracycline and material containing the infectious agent did not subsequently give aster yellows to plants. But Maramorosch knew that tetracyclines have no effect on viruses, so he concluded at the time that unusually high temperatures in the greenhouse—rather than the drug—prevented pathogen transmission. “I missed the boat by … years because I didn't repeat that experiment,” Maramorosch says.

    Finally, in 1967, Yoji Doi and colleagues at the University of Tokyo used electron microscopy to discover structures that resembled bacteria called mycoplasmas in plant tissue afflicted with presumptive viral infections. Like mycoplasmas—which cause respiratory ailments and other diseases in people and animals—the plant microbes were unusually small and lacked rigid cell walls. Additional tests established that the pathogens succumbed to antibiotics.

    Doi's findings stimulated a flurry of research that linked these mycoplasma-like organisms to many other “atypical virus” plant diseases. In the 1980s, scientists began isolating pieces of bacterial DNA from afflicted plants. They developed diagnostic techniques and categorized the microbes based on shared sequences and other DNA patterns. The plant bacteria proved to be only distantly related to mycoplasmas and, in 1994, researchers coined the name phytoplasma.

    Unlike most bacteria that sicken plants, phytoplasmas reside within cells rather than in the surrounding space. They duplicate only inside plants or insects, a property that makes them hard to study. The inability to grow pure cultures in laboratory broth renders even some basic experiments impossible. For example, scientists can't purify a suspect phytoplasma and inoculate healthy plants to test whether it causes a given disease. Even sequencing their genomes proved challenging because the bacterial DNA must be separated from that of the plants or insects they infect.

    Sowing host changes

    Phytoplasmas trigger symptoms that suggest they interfere with normal plant development. The bacteria tend to promote vegetative growth—producing plants with extra leaves, shoots, and branches—and to quash reproductive activities such as flower formation. The microbes also promote dwarfism, turn normally nongreen parts green, elicit general yellowing, and cause “witches' broom,” a condition in which branches cluster and grow dense.

    With the genomes in hand, researchers have begun to look for proteins that enable phytoplasmas to hijack plant processes. Using computer programs, they have pinned down genes for proteins that the bacteria likely secrete into the cells they infect.

    Earlier this year, molecular biologist Saskia Hogenhout of the John Innes Centre in Norwich, U.K., identified such a gene from Aster Yellows-Witches' Broom (AY-WB) phytoplasma. The gene contains a “nuclear localization” sequence, which suggests that the protein it encodes might be able to access the host cell's DNA. This observation intrigued Hogenhout because it raised the possibility that the protein, secreted AY-WB protein 11 (SAP11), alters plant-cell gene activity. The sap11 gene turns on in AY-WB-infected plants and, as predicted by the presence of the localization sequence, its protein accumulates in plant cell nuclei, Hogenhout's team reported in the January issue of Molecular Plant-Microbe Interactions.

    Like other phytoplasmas, AY-WB is found only in the phloem, yet the researchers documented SAP11 in other tissues, suggesting that the protein could exert widespread effects on plants. Indeed, Hogenhout says that when her team engineered Arabidopsis thaliana to manufacture SAP11 in almost all of its cells, the mustard plant displayed many signs of AY-WB infection, providing additional evidence of the protein's role in disease. SAP11 production or AY-WB infection modifies the activity of plant genes that participate in development, she adds, “so that's satisfying.”

    Shigetou Namba, a plant pathologist at the University of Tokyo, and colleagues have uncovered a different virulence protein, this one from the OY phytoplasma, which causes a disease called Tengu-su (Tengu's nest). Affected plants develop small, short branches that look like the nest of Tengu, a supernatural creature from Japanese folklore.

    The Tokyo team began its study by identifying 30 proteins that are likely to be secreted from OY phytoplasma, based on DNA sequences. They next put the gene for each of these proteins into an herb called Nicotiana benthamiana, one per plant. Twenty-nine of the plants looked normal, but the 30th developed witches' broom and dwarfism, the researchers reported in the 14 April issue of the Proceedings of the National Academy of Sciences. They called that symptom-causing bacterial protein TENGU. Like SAP11, TENGU appears outside the phloem in infected plants. “The molecules can reach farther than the bacterium,” says Namba. “These kinds of small proteins might [aid the] survival of microorganisms that are restricted to phloem tissues.”

    Twisted magic.

    Unlike a normal plant (left), those with an OY phytoplasma infection (middle) or engineered to make the protein TENGU (right) develop witches' broom.


    Studies of A. thaliana engineered to contain the tengu gene revealed that the activity of more than 900 genes changed in response to the protein. Among the many genes whose activity levels dropped, at least a dozen contribute to signaling by the hormone auxin, which normally ensures that plants grow from their upper tips. Suppression of those genes might explain the witches' broom, Namba suggests.

    The possibility that SAP11 and TENGU govern plant-gene activity challenges the prevailing idea in the field that phytoplasmas cause symptoms by depleting nutrients or growth regulators, says Nigel Harrison, a plant pathologist at the Fort Lauderdale Research and Education Center of the University of Florida. The new work “implies that phytoplasma parasitism does not just involve eating the sap,” he explains. “The bacteria are influencing plant development, maybe to produce more of what they want, by regulating the plant genome.”

    No one knows what phytoplasmas “want,” but the myriad ways they modify plants increase the longevity and breeding success of some insects that the bacteria use for transmission. The mechanism of this phenomenon is currently unknown, but researchers note that the additional vegetative tissues induced by infection expand the territory where insects can feed and lay eggs. Hogenhout's preliminary work with the leafhopper Macrosteles quadrilineatus shows that feeding it plants engineered to produce SAP11 increases the insect's reproductive output. “That's really cool because it means that phytoplasma molecules exert control beyond their direct interactions with plants,” says Hogenhout. “Phytoplasmas that can generate more insect vectors are more likely to survive and spread.”

    The bacteria can even broaden an insect's host range, somehow making normally inhospitable plants more welcoming. Decades ago, Maramorosch noticed that aster yellows disease lengthens the normally short survival time of Dalbulus maidis on asters.

    What's more, phytoplasmas lure to plants the very insects needed to spread the microbes. The phytoplasma Candidatus P. mali causes apple proliferation disease, a condition characterized by witches' broom and small, tasteless fruit. The microbe prods apple trees to produce a substance that attracts a sap-sucking insect called Cacopsylla picta, chemical ecologist Jürgen Gross of the Federal Research Centre for Cultivated Plants in Dossenheim, Germany, and colleagues reported in the August 2008 issue of the Journal of Chemical Ecology. The phytoplasma-induced apple-tree compound is a chemical called β-caryophyllene that preferentially attracts C. picta over other insects, the researchers subsequently showed. Gross's team has now exploited this preference, creating an insect trap that might help monitor or capture C. picta in the field.

    Analyses of the phytoplasma genome sequences hint that researchers still have a lot to learn about these formidable microbes. Three of the four genomes contain large amounts of repeated DNA sequence, and even the fourth one carries multiple copies of almost 100 genes, a noteworthy observation, given that phytoplasmas have unusually small genomes, even for bacteria. The repeats might allow phytoplasmas to “try something new” by adapting genes and other DNA to new functions without losing essential capabilities, speculates Robert Davis, a plant pathologist at the Agricultural Research Service at the U.S. Department of Agriculture in Beltsville, Maryland.

    The phytoplasmas with sequenced genomes represent only a few twigs on the genus's evolutionary tree—and this collection does not include any from the branch that contains the majority of Candidatus phytoplasma species, which cause significant diseases and carry some of the smallest phytoplasma genomes. Screening healthy plants might reveal additional phytoplasmas, including ones that live peacefully within these hosts, say researchers. Harvesting genome sequences from this group of organisms could provide valuable information for understanding symbiosis and fuel ongoing studies of evolution as well as pathogenesis.

    Fighting back

    Given the damage phytoplasmas wreak, researchers want to develop ways to keep the microbes in check, but they need more basic knowledge to curb transmission, says Phyllis Weintraub, an entomologist at the Agricultural Research Organization of Gilat Research Center in Israel. Most control tactics depend on identifying the insects that carry a given pathogen, so researchers have information about how the vectors reproduce and migrate, how far they can fly, and what else they feed on. However, “there are a great number of [phytoplasma] diseases for which the vector is unknown,” Weintraub says.

    In addition, the complex ecology of phytoplasmas and their hosts plagues plant pathologists. A given phytoplasma species can infect a few or many kinds of plants and cause a wide range of symptoms. Furthermore, different phytoplasmas can cause nearly identical-looking plant diseases, and a single insect can carry multiple varieties of the microbes.

    Hassling plants, helping insects.

    Phytoplasmas trouble grapevines (top left), raspberries (top right), and coneflowers (bottom left) but can boost reproductive prowess of leafhoppers (bottom right) that spread the bacteria.


    In some crops, the bacteria are not abundant enough to spot, nor are they uniformly distributed in the phloem. “If you don't detect something, you don't know if you've just sampled the wrong bit of plant, or [sampled] at the wrong time, or if it's really not there,” says Matthew Dickinson, a plant pathologist at the University of Nottingham in the United Kingdom.

    Even with the right knowledge in hand, horticulturists and others find that traditional control measures—pesticides, culling infected plants, or destroying weeds and other plants that can serve as a reservoir for phytoplasmas—are only moderately effective. All too often, “an insect will get through” and the bacteria will spread, says Dickinson. Injecting tetracycline into trunks can limit infections in trees, but that's a pricey strategy and doesn't always work. Furthermore, using this drug in agriculture worries many human-disease experts because the practice could exacerbate clinical antibiotic resistance.

    Some researchers are trying to find or create plants that fend off a phytoplasma attack or remain healthy while carrying the microbes. Dickinson is assessing whether some coconut palms harbor the bacteria without succumbing to disease, for example. “We've got some preliminary evidence” that such resistant trees exist, he says.

    Other scientists are considering making plants distasteful to a phytoplasma-ferrying insect by engineering compounds into them that deter feeding. Such a scheme has shown promise in a leafhopper-transmitted disease in rice, Harrison and Weintraub say.

    Infecting plants with mild forms of phytoplasmas also holds potential. Evidence suggests that such strains can guard against damage from those that produce more severe symptoms. Alternatively, scientists might employ other kinds of harmless bacteria to compete with phytoplasmas inside the insects, Weintraub says.

    Researchers are turning to the newly sequenced phytoplasma genomes—and following up on recent results—to identify additional ways of interfering with crucial interactions between host proteins and phytoplasma virulence molecules. In addition, DNA sequence comparisons among different phytoplasmas could help scientists work out why some of the bacteria benefit insects while others harm them or seem to have no effect. Such findings, in turn, may suggest new control strategies.

    After eluding identification for decades, phytoplasmas are finally giving up some of their secrets. Exploiting those revelations to save trees and crops remains a challenge, however. Taking a different tack, some scientists are already contemplating whether they can create plants with only the positive attributes of an infection. Perhaps adding a single phytoplasma gene to a plant's DNA could create bushy poinsettias or green peonies that don't carry the pesky pathogens.

  15. Mathematics

    A Good Sign

    1. Angela Saini*

    When existing notation for explaining a complex mathematical problem wasn't enough, Byron Cook teamed up with an artist to design his own.

    Symbolic art.

    Tauba Auerbach (left) and Byron Cook.


    Mathematical signs have the power to squeeze hundreds of pages of work into a few neat lines. But as math gets more complex, the symbols can show their limitations. Byron Cook, a senior researcher at Microsoft's research laboratory in Cambridge, U.K., and a professor of computer science at Queen Mary, University of London, learned this recently while tackling a 70-year-old puzzle called the halting problem.

    Halting is the reason some computer programs occasionally get stuck in endless loops. It plagues PC users with the hourglass icon that never disappears, and Mac users with the rainbow-colored “spinning wheel of death.” Scientists have known for decades that there is no universal solution to the glitch. But in 2008, Cook made a breakthrough that could help to prove whether most real-life computer programs would come to an end or hang forever.

    Fellow computer scientists agreed that Cook's approximate solution was important, but the complicated set theory he used wasn't easy to explain. “When I was giving lectures or talking to product developers, I needed to get a lot of information across quickly, and this was getting difficult,” says Cook. Things got even tougher when he began to write a book on the subject.

    Mathematicians in this kind of pickle often devise shorthand notation to trim down the maze of formulae. “Historically, the abbreviation of complex texts has allowed some problems to be solved that were otherwise intractable,” says Patrick Ion, associate editor of Mathematical Reviews for the American Mathematical Society. “The introduction many centuries ago of, say, zero, or the equals sign, are thought to be turning points.” Roughly 2500 symbols are in use today, most of them variations on earlier symbols or letters of the alphabet.

    Sensible addition.

    These new math symbols are designed to be understood intuitively.


    Cook decided to take a bigger leap. He enlisted the help of a friend, New York City–based artist Tauba Auerbach, a former professional sign writer whose artwork has played with language and technology. For example, her 2005 show “How to Spell the Alphabet” explored the analog and digital ways we express characters, from the optician's eye chart to patterns in binary code. “I often read about math and physics, and they're integral to my art,” Auerbach says. “But there was of course a limit to how much of Byron's work I could understand. I brainstormed for days, and my goal was to capture the motion or trajectory of each function—the way the sets of numbers were being grouped or moved in a certain direction.”

    But when Cook tested the first drafts of the new notation on colleagues, they hated them. Some of the symbols were too ornate to be drawn freehand. One was binned for resembling a swastika; another for being too similar to a Japanese language character.

    Through trial and error, Cook and Auerbach learned that new symbols should build on old ones, so they make intuitive sense at first glance. “I knew that Tauba had been successful after I used her final symbols in a lecture and people hardly noticed. Before I knew it, my students were using them, too,” says Cook, as he sketches the nine symbols they created onto his whiteboard. The simplest one, representing a mathematical operation that gathers only the right-hand coordinates of a set of pairs of numbers, is a rounded, inverted “L” with an arrow at the end, pointing rightward. In two strokes, it expresses a concept that would otherwise take two lines to explain. Another symbol, which constructs a new numerical relationship for a set of variables using a mapping, is an L-shape with a “greater-than” symbol inside it.
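    For readers who want the operation behind that first glyph spelled out: it takes a set of pairs and keeps only the right-hand coordinate of each, that is, the range of a binary relation. The short sketch below expresses the same operation in Python on a hypothetical example; the glyph itself is Cook and Auerbach's design and cannot be typed here.

```python
# The operation the rounded, arrow-tipped "L" abbreviates: from a set of pairs
# (a binary relation), keep only the right-hand coordinates, i.e., its range.
# The example relation R is hypothetical.
R = {(1, 4), (2, 4), (3, 9)}

def rhs(relation):
    """Right-hand coordinates of a set of pairs."""
    return {y for (_, y) in relation}

print(rhs(R))   # {4, 9}
```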

    “I rather like their notations,” says David Bressoud, president of the Mathematical Association of America and a professor of mathematics at Macalester College in St. Paul, Minnesota. “It will be interesting to see if they get picked up beyond the computer science community.” That is likely to depend more on the value of Cook's research than on the beauty of the designs, Bressoud adds.

    Still, says Ion, “good notation has a tendency to beat out bad notation.” Leonhard Euler's decision to use the letter “e” for the base of the natural logarithm around 250 years ago, for example, was so successful that we still use it today. The battle continues, however, between Newton's choice to use a dot over “x” to represent a derivative, and Leibniz's “dx/dt” notation for the same thing.

    Although the jury is still out among his colleagues, Cook says the new notation has made a “dramatic” difference to his own work. He estimates that more than 100 of his students and colleagues already recognize the symbols. Next, he hopes to have them implemented in LaTeX, the typesetting package that mathematicians commonly use to publish their work electronically. After that, Cook says, “our plan is to see what else needs simplification. We might invent some more.”

    Ion predicts that mathematical symbols are due for a bigger shakeup: “The way we think about notation is evolving. Color may come in, and as technology advances, maybe even animation.”

    • * Angela Saini is a freelance writer based in London.

  16. News

    Ourselves and Our Interactions: The Ultimate Physics Problem?

    1. Adrian Cho

    In the field of complex socioeconomic systems, physicists and others analyze people almost as if they were interchangeable electrons. Can that approach decipher society and what ails it?

    New model army.

    Janjaweed fighters interact with many other groups in a model of the Darfur conflict.


    ZÜRICH, SWITZERLAND—Janusz Holyst sounds frustrated. “When I look to the textbooks on emotion, there are no numbers, there are no equations,” laments the theoretical physicist from the Warsaw University of Technology. Holyst hopes to supply what the books are lacking, however. He aims to develop the tools to analyze emotions quantitatively. Then he intends to use them to literally read your feelings.

    With €3.6 million of support from the European Commission, Holyst and eight collaborators aim to develop a computer program that can analyze dialogue from Internet chat rooms and tell when people are growing excited, angry, and so on. Flagging key words isn't enough, he says, because people use language differently. Some, for example, curse regardless of their mood. Ultimately, Holyst hopes to decipher group emotion, such as the euphoria that pervades a stadium full of sports fans when the home team wins. “There are tens of models of opinion formation,” he says, “but there are no models of emotion.”

    Holyst is one of a small but growing number of physicists who are turning from atoms and electrons to study social phenomena such as terrorism, the growth of cities, and the popularity of Internet videos. Joining with social scientists, they treat groups of people as “complex socioeconomic systems” of many interacting individuals and analyze them using conceptual tools borrowed from physics, mathematics, and computer science. Last month, 130 researchers of various stripes gathered here to discuss such work.*

    Forays into “sociophysics” began in the early 1970s. Physicists proposed, for example, that individuals interact to form public opinion much as neighboring atoms make a crystal magnetic by aligning their magnetic fields; researchers analyzed the social phenomenon by adapting the Ising model used to describe such magnetic interactions. In the 1990s, many physicists turned to economics in the controversial subfield of econophysics (see sidebar, p. 408). Now, the movement seems to be gathering momentum, as complex-systems researchers have made solid contributions in the study of traffic, epidemiology, and economics. Some are now tackling more-daunting problems, such as the emergence of social norms.
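    The following is a minimal sketch of the kind of Ising-style opinion model alluded to above: agents on a grid hold one of two opinions and tend to align with their neighbors, while a "social temperature" sets how often they deviate. The grid size, temperature, and update rule are illustrative, not any particular published model.

```python
# Minimal Ising-style opinion model: agents on a grid hold opinion +1 or -1 and
# tend to align with their four neighbors; a "social temperature" T controls how
# often they go against the local majority. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T, steps = 50, 1.5, 200_000
s = rng.choice([-1, 1], size=(N, N))          # random initial opinions

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    neighbors = s[(i+1) % N, j] + s[(i-1) % N, j] + s[i, (j+1) % N] + s[i, (j-1) % N]
    dE = 2 * s[i, j] * neighbors               # "cost" of this agent switching opinion
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i, j] *= -1                          # Metropolis-style flip

print("net opinion:", s.mean())                # near +/-1 means consensus; near 0 means a split
```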

    “The problems are more complicated than most natural scientists assume, but less hopeless than most social scientists think,” says Dirk Helbing, a physicist-turned-sociologist at the Swiss Federal Institute of Technology Zürich (ETHZ). Oversimplification is a risk. “In some fields, physicists have a bad reputation for applying the Ising model directly” to cases in which it may not fit the facts, says Stephen Eubank, a physicist at Virginia Polytechnic Institute and State University (Virginia Tech) in Blacksburg who models epidemics.

    Nevertheless, physicists and social scientists are working together on increasingly nuanced and realistic models, Helbing says. The complex-systems approach could help avert—or at least explain—systemic crises such as the current global economic meltdown, he says. “We spend billions of dollars trying to understand the origins of the universe,” Helbing says, “while we still don't understand the conditions for a stable society, a functioning economy, or peace.”

    Complex is as complex does

    Scientists can't say exactly what a complex system is in the same way they can define an atom or a gene. Rather, they tend to describe how a complex system looks and behaves. “If you ask a biologist, ‘What is life?’ he will give you a list of characteristics. He probably won't give you a strict definition,” says Ingve Simonsen, a physicist at the Norwegian University of Science and Technology in Trondheim. “It's similar with complex systems.”

    First off, a complex system consists of many elements that interact so strongly that they tend to organize themselves in one way or another. This “emergent behavior” makes the group more than the sum of its parts. A car may be complicated, but it is not a complex system, as each of its parts interacts with a few others in a predictable way. But cars in traffic form a complex system, as drivers' jockeying for position can lead to surprises such as “phantom” traffic jams that arise for no obvious reason.

    Complex systems also tend to be persnickety. A tiny change among the pieces may lead to a big swing in the behavior of the whole system—as when, on 4 November 2006, the disconnection of a single power cable in northern Germany triggered regional blackouts as far away as Portugal.

    The systems are often hard to control and may be impossible to “tune” for optimal behavior. At the same time, “there are so many ways for a complex system to fail that it's impossible to prepare for them all,” says Eubank.

    Like the riveter's trade, the field is defined largely by its tools. Researchers borrow from statistical physics the theories that describe phase transitions—in which interactions among many atoms lead to dramatic changes in a material as a whole, such as the freezing of water into ice and the emergence of magnetism as hot iron cools—to probe how, say, public sentiment can suddenly turn against a once-popular war.

    Much work also relies on networks that represent individuals—such as victims of an epidemic—as points or “nodes” and the social ties between them as lines or “edges.” Researchers often use “agent-based models”: computer simulations in which myriad virtual agents interact like robots following predefined rules. Such models might help reveal, for example, how interactions between investors, hedge funds, and banks trigger market swings.

    What's the holdup?

    Researchers can explain how traffic flow breaks down—although they can't necessarily prevent it from happening.


    Behind it all lies the assumption that, at least within distinct types, people are like subatomic particles: basically the same. “We like to think that we are unique,” says Alessandro Vespignani, a physicist at Indiana University, Bloomington, who works on networks. “But probably for 90% of our social interactions, we are not so unique.” The approach appeals to a physicist's penchant for divining the simple mechanisms that underlie complex phenomena, Vespignani says: “Physicists bring a special balance between mathematical rigor and computational approaches and intuition for the problem. We are artists of the approximation.”

    Germs and traffic jams

    Over the past decade, complex-systems researchers have made some signal advances. For example, they have developed a deeper understanding of traffic, which, to a physicist, resembles a fluid flowing through a tube. There are key differences, however. Because the atoms in a fluid bounce off one another, a fluid typically speeds up when it flows through a constriction. Drivers avoid collisions at all costs, so when a highway narrows, traffic inevitably slows down.

    To capture the essence of traffic, scientists have to include the “forces” between vehicles—actually, drivers' reactions to one another. In the 1990s, physicists and others developed mathematical models that do that and can accurately simulate highway and city traffic, and those models are now built into commercial software used by municipal planners and others. Researchers can also explain surprising effects, such as how pedestrians passing in a hallway spontaneously form two opposing lanes and how putting an obstacle in their path can actually speed the flow.
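    The sketch below gives a minimal flavor of such a driver-reaction model: each car on a ring road accelerates toward a speed that depends on the gap to the car ahead, and, depending on parameters, small disturbances can grow into stop-and-go waves. The functional form and numbers are illustrative, loosely in the spirit of "optimal velocity" models, and are not the commercial software mentioned above.

```python
# Minimal car-following sketch: each driver adjusts toward an "optimal velocity"
# that depends on the gap to the car ahead (optimal-velocity-style model on a
# ring road). All parameters are illustrative.
import numpy as np

n_cars, road_len, dt, steps = 30, 300.0, 0.1, 5000
a = 0.5                                       # driver sensitivity (1/s)
x = np.sort(np.random.rand(n_cars)) * road_len
v = np.full(n_cars, 10.0)

def optimal_velocity(gap):
    """Desired speed (m/s) as a function of headway to the car in front."""
    return 15.0 * (np.tanh(gap / 10.0 - 2.0) + np.tanh(2.0)) / 2.0

for _ in range(steps):
    gap = (np.roll(x, -1) - x) % road_len          # distance to the car ahead (ring road)
    v += a * (optimal_velocity(gap) - v) * dt      # drivers adjust toward the gap-dependent speed
    v = np.clip(v, 0.0, None)
    x = (x + v * dt) % road_len

print("speed spread (m/s):", v.max() - v.min())    # a large spread signals stop-and-go waves
```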

    Complex-systems experts have also made contributions in epidemiology. In 2001, Vespignani and colleagues showed that in certain types of highly connected networks called “scale-free,” it's impossible to stop the spread of an epidemic no matter how many people are inoculated. Conversely, in 2003, Shlomo Havlin, a physicist at Bar-Ilan University in Ramat Gan, Israel, and colleagues found a simple strategy for inoculating against a disease that beats picking random individuals. By going a step further and picking randomly chosen friends of those individuals, health officials can, on average, inoculate people with more social ties through which to spread the disease.
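    A hedged sketch of Havlin's friend-of-a-friend idea is given below, using the networkx library and an illustrative scale-free network: compared with vaccinating people chosen at random, vaccinating a randomly chosen friend of each random person tends to reach better-connected individuals.

```python
# Sketch of the "immunize a random friend of a random person" strategy described
# above, compared with immunizing people picked purely at random.
# The network and budget are illustrative.
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(10_000, 3)     # illustrative scale-free contact network
nodes = list(G.nodes)
budget = 1_000                              # number of vaccine doses

random_targets = set(random.sample(nodes, budget))

friend_targets = set()
while len(friend_targets) < budget:
    person = random.choice(nodes)
    friend = random.choice(list(G.neighbors(person)))   # vaccinate their friend, not them
    friend_targets.add(friend)

def mean_degree(targets):
    return sum(G.degree(n) for n in targets) / len(targets)

print("avg contacts, random picks:", mean_degree(random_targets))
print("avg contacts, friend picks:", mean_degree(friend_targets))   # noticeably higher
```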

    Some of the work is timely—even urgent. At the meeting, Vespignani presented a detailed model of the spread of the swine flu, H1N1, which includes the network of all the world's airline routes and detailed transit maps of major metropolitan areas. Such input allows researchers to predict not only the prevalence of a disease but also the geographical path of its spread, Vespignani says. His preliminary results suggest that 30% to 60% of Australians may catch H1N1 by October.

    Such efforts are working their way into mainstream epidemiology. Vespignani's work is funded by the U.S. National Institutes of Health, as is modeling by Eubank, and federal officials have sought the modelers' input on problems such as the spread of H1N1. “Some of this is an ongoing effort by midlevel people to convince higher-level people to let science play more of a role” in responding to epidemics, Eubank says.

    The hard social sciences

    Traffic and epidemics may look like physics problems, but researchers are also tackling phenomena that seem purely social. To probe the emergence of social norms—the unwritten rules that keep us from, say, asking others how much they earn—some are turning to the computer simulations of evolutionary game theory, in which myriad virtual players engage in logical contests. In one classic setup, neighbors on a grid face the “prisoners' dilemma”: Both are rewarded if they cooperate, and both are punished if they betray each other, or “defect.” But each reaps a bigger reward for being the only defector and a stiffer penalty for being the sole cooperator. The logic of the situation drives both to defect.

    To make the games more interesting, however, players' strategies can evolve. Players might imitate their most successful neighbor, or they might move closer to more-successful players. In either case, defectors dominate the field, Helbing finds. But imitation and migration together lead to the growth of colonies of cooperators.
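    Below is a minimal sketch of such a grid game: each site cooperates or defects, collects prisoners'-dilemma payoffs from its four neighbors, and then imitates its most successful neighbor. The payoff values are illustrative, and the migration step Helbing combines with imitation is omitted for brevity.

```python
# Minimal spatial prisoners' dilemma: agents on a grid cooperate (True) or defect
# (False), earn payoffs from their four neighbors, then copy the strategy of the
# most successful neighbor. Payoffs are illustrative; migration is omitted.
import numpy as np

rng = np.random.default_rng(0)
N = 50
R, S, T, P = 3.0, 0.0, 5.0, 1.0            # reward, sucker's payoff, temptation, punishment
coop = rng.random((N, N)) < 0.5            # True = cooperate

def payoffs(coop):
    total = np.zeros((N, N))
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(coop, shift, axis=(0, 1))
        total += np.where(coop, np.where(nb, R, S), np.where(nb, T, P))
    return total

for _ in range(100):
    pay = payoffs(coop)
    best_pay, best_strategy = pay.copy(), coop.copy()
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb_pay = np.roll(pay, shift, axis=(0, 1))
        nb_strat = np.roll(coop, shift, axis=(0, 1))
        better = nb_pay > best_pay
        best_pay = np.where(better, nb_pay, best_pay)
        best_strategy = np.where(better, nb_strat, best_strategy)
    coop = best_strategy.astype(bool)       # everyone imitates their most successful neighbor

print("fraction cooperating after 100 rounds:", coop.mean())
```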

    More intriguing, Helbing has set two populations playing the same game with different reward schemes and, hence, different dominant strategies. However, interactions can cause players from one group to adopt the other's strategy. That resembles the emergence of a norm, Helbing says, as the interactions cause the players to change their behavior. The result may seem remote from the proscription of nose picking, but norms are often arbitrary, Helbing notes, “and for that reason we think it's okay to abstract them from their content.”

    Other researchers are analyzing historical movements. Europe in the Middle Ages consisted of kingdoms, each ruled by a hierarchy of lords who collected taxes from those below them and passed them to those above. That system of indirect rule gradually gave way to one in which sovereigns ruled their countries directly, and Lars-Erik Cederman, a political scientist at ETHZ, has modeled that process.

    Cederman represents each lord by a spot on a map and a node in a network resembling tree roots that can rearrange to funnel money upward most efficiently. Indirect rule won't arise at all, Cederman finds, if a lord's might begins to wane at the manor doorstep. But as the lords' influence spreads further into the countryside—presumably because of technological progress—the network of lords simplifies and eventually disappears. “The model, we claim, is the first to capture this historical trend,” Cederman says.

    Even more ambitiously, Jürgen Scheffran, a political scientist at the University of Hamburg in Germany, hopes to model the impact of climate change on regional conflicts. He and his colleagues are modeling the ethnic conflict in the Darfur region of Sudan, where the southward expansion of the desert has driven Arab herders onto the land of sub-Saharan farmers. “This may be one of the first cases where climate change has already affected a conflict,” he says.

    Over the next 5 years, Scheffran and colleagues aim to reproduce the conflict in a detailed agent-based computer model. That daunting task requires quantifying the interactions between numerous actors, including farmers, herders, rebels fighting on behalf of the farmers, the Janjaweed militiamen who oppose them, the Sudanese government, aid organizations, and others. “Hopefully, we would get better strategies for limiting the violence,” Scheffran says.

    How the field of complex systems will evolve remains to be seen. It appears to be growing faster in Europe, where the European Commission has recently committed €20 million to such research in the next 4 years. Researchers in the United States face tougher funding prospects, says Neil Johnson, a physicist at the University of Miami in Florida, who fits modeling of terrorist networks in with work on the mathematical ecology of fish, a strong suit at the university.

    Still, with successes to build on and creative scientists tackling new problems, researchers are confident that further important results will come—even if they can't predict exactly where this network of idiosyncratic efforts will pay off most handsomely.

    • * International Workshop on Coping with Crises in Complex Socio-Economic Systems, 8–12 June 2009.

  17. News

    Econophysics: Still Controversial After All These Years

    1. Adrian Cho

    Econophysics is the biggest branch of complex-systems research, and physicists have flocked into finance. But many economists view econophysicists as dilettantes.

    In 1997, physicists Imre Kondor of the Collegium Budapest and János Kertész of the Budapest University of Technology and Economics organized a conference on the budding field of econophysics, which has since enjoyed a mixed reputation. It is the biggest branch of complex-systems research, and physicists have flocked into finance. But many economists view econophysicists as dilettantes. “Shortly after this conference, I went to work in a bank, and I never met any animosity at all,” Kondor says. “The reaction of the academic community has been markedly different than that of the practitioners.”


    Big market swings are inevitable, econophysicists warn.


    Traditional economic theory is fundamentally flawed, econophysicists say. It relies on “representative agent models” in which a hypothetical average Joe interacts with monolithic economic forces. Such models ignore correlations that lead to, say, booms and busts. To prove rigorous theorems, economists assume that market fluctuations follow a bell-shaped “Gaussian distribution,” which underestimates the probability of big swings. “Traditional economics is about creating mathematical models that are well-defined, tractable, and have nothing to do with reality,” Kertész says.

    Econophysicists claim to take a more data-driven approach. They have yet to score a major breakthrough, but they say their contributions are gaining wider acceptance. For example, econophysicists have introduced new tools to analyze correlations among stocks and more efficiently optimize a portfolio, Kertész says. They have also exploited new types of data, he says, such as high-resolution records of the smallest movements in a market and a market's limit order book, which records every offer to buy or sell a stock regardless of whether it leads to a transaction.

    Physicists are also helping to change the emphasis in research, Kondor says. Economists have long known that, for example, the returns on stocks do not fluctuate up and down as mildly as their models assume. The real distribution of returns has “fat tails” at its extremes, which means very large gains and losses are far more likely than assumed. But most economists continue to view such events as flukes instead of game-changing inevitabilities. “The physicists were the ones who started to systematically work through the consequences of the fat tails,” Kondor says.
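    The practical difference is easy to quantify. The short sketch below compares tail probabilities under a Gaussian and under a heavier-tailed Student-t distribution; the choice of three degrees of freedom is an arbitrary illustration, not a fitted model of any market.

    ```python
    # Tail probabilities: bell curve versus a generic fat-tailed distribution.
    from scipy import stats

    t3 = stats.t(df=3)                      # an illustrative fat-tailed distribution
    for k in (3, 5, 10):                    # threshold in standard deviations
        p_gauss = 2 * stats.norm.sf(k)      # two-sided Gaussian tail
        p_fat = 2 * t3.sf(k * t3.std())     # same threshold on the t's own scale
        print(f"move beyond {k} sigma: Gaussian {p_gauss:.1e}  Student-t(3) {p_fat:.1e}")
    ```

    At five standard deviations, the fat-tailed distribution already puts thousands of times more weight in the tail, roughly the gap between a theoretical impossibility and an event a risk manager should plan for.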

    Still, econophysics does not impress some economists. “In my opinion, it is without influence and will continue to be without influence,” says Ernst Fehr of the University of Zürich in Switzerland. Physicists apply their models to problems without grasping the details, he says. “I employed a physicist once, and I was really disappointed.”

    But Thomas Lux, an economist at the University of Kiel in Germany, says that, especially in Europe and Japan, growing numbers of economists are searching for alternatives to standard economic theory. “Interactions are what produce the economy, but economics ignores interactions,” he says. In contrast, the multiple-agent models favored by econophysicists are ideal for scrutinizing interactions, Lux says.

    With droves of physicists in finance, some critics have blamed them for the current global economic meltdown. That is unfair, econophysicists say: Most physicists working in finance were asked to devise and evaluate instruments—such as the notorious credit default swaps—with the tools they were given, not to invent better tools. Moreover, says Didier Sornette of the Swiss Federal Institute of Technology Zürich, “physicists have been the most critical of the axioms underlying the pricing, hedging, and risk analysis associated with these instruments.”

  18. News

    Counterterrorism's New Tool: ‘Metanetwork’ Analysis

    1. John Bohannon

    Researchers have created sophisticated new programs to probe beneath the surface of social interactions. How well do they work against terrorists?

    PALO ALTO, CALIFORNIA—In a comfortable Silicon Valley boardroom, a world away from the hellish violence of Iraq, Shyam Sankar projects a satellite map of Baghdad on a screen. “Now let's look at the geospatial distribution of significant acts,” says the software engineer.

    With a few clicks of his computer mouse, he creates a timeline below the map. What looks like a city skyline rises up over April and May 2008. But rather than skyscrapers, each bar represents a daily tally of carnage: suicide bombings, shootings, mortar attacks, and improvised explosive device (IED) detonations. The data come from a collaboration of researchers at Princeton University and the U.S. Military Academy at West Point in New York state. They are looking behind these known events to a shadowy network of insurgents. As Sankar slowly sweeps his cursor across the timeline, Baghdad's Sadr City district explodes with splotches of color—a heat map of violence. “The attacks turn out to be correlated over time and space with the construction of this security wall,” says Asher Sinensky, tracing a line across the district.

    Conspiracy map.

    The network around terrorist Noordin Mohammed Top.


    Insights like these are crucial for U.S. and Iraqi forces trying to predict the insurgents' next moves. Sankar, 27, and Sinensky, 29, are “forward-deployed engineers” here at Palantir Technologies, the software company behind this data analysis platform. Business is booming.

    “Here's work we're doing with the Naval Postgraduate School,” says Sinensky. A blizzard of tiny boxes appears, all interconnected by a web of lines. “This is the network of people connected to Noordin Mohammed Top,” a recruiter for the group Jemaah Islamiyah and Southeast Asia's most wanted terrorist. The graph represents the suspects' known communications and relationships, as well as their known involvement with terrorist plots, distilled from unclassified data provided by the International Crisis Group headquartered in Brussels. “You have this huge ball and it's somewhat meaningless,” Sinensky says. He selects a command from a drop-down menu.

    “So the postgraduates built these plug-ins to apply classic social network analysis techniques, like betweenness centrality and eigenvector centrality.” He executes a command and names appear, ranked by a calculation of their importance as nodes in the network. “It's no surprise that Noordin is first,” says Sinensky. “But what about these next two? Maybe these are people I should focus some resources on.”
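    The measures themselves are standard and available in open-source tools. The sketch below ranks a made-up graph with the networkx library; the node names and edges are hypothetical, and the code is not Palantir's platform or the Naval Postgraduate School's plug-ins.

    ```python
    # Ranking nodes of a hypothetical network by classic centrality measures.
    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([
        ("recruiter", "courier"), ("recruiter", "financier"),
        ("recruiter", "bomb_maker"), ("courier", "safe_house"),
        ("financier", "safe_house"), ("bomb_maker", "driver"),
    ])

    # Who touches the most people, who sits on the most shortest paths,
    # and who is linked to other well-connected nodes.
    degree = nx.degree_centrality(g)
    betweenness = nx.betweenness_centrality(g)
    eigenvector = nx.eigenvector_centrality(g)

    for node in sorted(g, key=betweenness.get, reverse=True):
        print(f"{node:12s} degree={degree[node]:.2f} "
              f"betweenness={betweenness[node]:.2f} "
              f"eigenvector={eigenvector[node]:.2f}")
    ```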

    A decade ago, most research on social networks was abstract and academic. But in the wake of the 11 September 2001 attacks, “there was an explosion of interest” in applying this research to warfare, says Kathleen Carley, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania. Palantir is just one of many companies vying for a piece of the military funding. Academic network scientists such as Carley are also diving in, competing for lucrative U.S. military contracts and grants.

    In spite of the boom, there is sharp disagreement about how effective social network analysis has been for counterterrorism. Some worry that in the rush to catch terrorists, the U.S. military has put too much faith in social network analysis. One former U.S. official even claims that applying these methods in war zones has led to unethical practices (see sidebar, p. 410).

    Tangled webs

    Just weeks after the 2001 plane hijackings in the United States that killed some 3000 people, it emerged that the attacks were not the work of a government but of a team of international terrorists: 19 hijackers and dozens of people providing funding and logistical support. “The intelligence community was in a complete hysteria,” says Marc Sageman, a forensic psychiatrist who has analyzed this and other militant networks for the U.S. government since the 1980s. U.S. government officials “turned to anyone” who could help assess this new threat and prevent another attack.

    Among the first to be tapped was Valdis Krebs, a management consultant who studies social networks within organizations. “A terrorist cell is essentially a project team like any other,” he says. The media provided a “nonstop stream” of information about the 11 September network—including their meetings, residences, and financial transactions around the world—which Krebs used to map a “quick and dirty” social network. Krebs published his analysis in December 2001 in Connections, the journal of the International Network for Social Network Analysis. One node in particular—Mohamed Atta—stood out in his network graph. “Atta scores the highest on all network centrality metrics—Degrees, Closeness, and Betweenness,” Krebs concluded in his paper. “Degrees reveals [the intensity of] Atta's activity in the network. Closeness measures his ability to access others in the network and monitor what is happening. Betweenness shows his control over the flow in the network—he plays the role of a broker.” Atta was indeed the ringleader and a member of al-Qaeda, the terrorist organization that claimed responsibility for the 11 September attacks through its spokesman, Osama bin Laden. Soon after, says Krebs, he and other network researchers were “invited to many meetings and briefings in Washington, D.C.”

    By 2003, U.S. defense officials had expanded the web of threats beyond the 11 September terrorists to include networks of “insurgents” in Afghanistan and Iraq. In October of that year, the U.S. Army created a research task force devoted to countering IEDs, which the U.S. Department of Defense described as “the weapon of choice for adaptive and resilient networks of insurgents and terrorists.” Part of the strategy was to apply network analysis to the available data—from types of devices to people funding, building, and deploying them.

    According to Sageman, the results were disappointing. “The network approach didn't really work to catch bad guys,” he says. It was limited partly by the rigidity of the underlying field of mathematics, graph theory. “We are good at modeling static networks,” he says, “but networks like these change over time. And we don't yet have a dynamic graph theory.” When one terrorist is caught or killed, for example, “he is replaced by a cousin” with different social links. “Changing a single link can completely change the graph,” he says, but the theory doesn't accommodate this. It's even harder to accommodate growth. “We don't have a decay function that can reliably remove the noise,” says Sageman. As a result, the bigger your network model grows, the noisier it gets—and “the less you see.”

    Deadly clues.

    Researchers have struggled to model the Iraqi insurgent networks behind IED attacks.


    The flaws of counterterrorism network analysis, according to Krebs, run deeper than math. “It is also about understanding sociology,” he says. No matter how good a network model is, it can't provide insights if researchers aren't “paying attention to the right things and therefore collecting the right data.” Without accounting for the content of communication, social network analysis runs into the “pizza delivery guy problem”: confusing regular contact with significant contact.

    As an illustration of the problem, Sageman points to a report on the 7 July 2005 bombing of the London Underground, issued in May by the U.K. Intelligence and Security Committee, titled Could 7/7 Have Been Prevented? Using network analysis, the researchers traced the relations between plotters, yielding a chaotic tangle of links. “They couldn't learn anything from this graph,” says Sageman. “It's a hairball!” (U.K. government officials declined to provide Science with a publishable high-resolution version, citing “security reasons.”)

    Going meta

    According to Ian McCulloh, a U.S. Army major who teaches network analysis at the Military Academy, the way forward is a technique called “dynamic metanetwork analysis.” McCulloh learned the technique from Carley, his adviser for a Ph.D. he completed at Carnegie Mellon last year. Carley and McCulloh say their models can deal with change in terrorist networks over time. And whereas classic social network analysis deals only with the question of “who” in networks, says Carley, “metanetworks include the who, when, what, where, why.” By capturing these layers, metanetworks “begin to get at culture,” she says. “You have to go beyond the communication network to consider the distribution of norms, attitudes, and beliefs, … the distribution of roles across gender, ages, and subgroups,” she says. “But the programs to do that go way beyond general social network analysis.”

    Carley has developed computer programs of her own to do that. She says that one of them, the Organizational Risk Analyzer (ORA), helps analysts “use information about people to ‘connect the dots.’ Then, ORA examines this network and finds those dots, those people, who represent a threat to the overall system.” The program uses both network theory and social psychology to calculate people's “cognitive demand,” which Carley defines as a measure of things such as “how many others they need to interact with, how many activities they are involved with, how complex those activities are, [and] how many resources they need to handle.” In the case of a business, an analyst can use ORA to identify employees who are crucial for a company's survival. The same applies for a network of insurgents.
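    ORA's actual measure is more elaborate, but the flavor can be conveyed by a crude composite like the one below; the inputs, the normalization, and the equal weighting are assumptions made only for illustration and do not reproduce ORA's formula.

    ```python
    # A toy stand-in (not ORA's measure) for "cognitive demand": the average of
    # how loaded a person is on contacts, activities, and resources, each scaled 0-1.
    def cognitive_demand(contacts, tasks, resources,
                         max_contacts, max_tasks, max_resources):
        loads = (contacts / max_contacts,
                 tasks / max_tasks,
                 resources / max_resources)
        return sum(loads) / len(loads)

    # Hypothetical comparison of two members of the same organization.
    print(cognitive_demand(contacts=14, tasks=6, resources=5,
                           max_contacts=20, max_tasks=8, max_resources=6))  # ~0.76
    print(cognitive_demand(contacts=3, tasks=1, resources=1,
                           max_contacts=20, max_tasks=8, max_resources=6))  # ~0.15
    ```

    The higher the score, the more an individual's removal would be expected to disrupt the organization, which is the sense in which such measures flag people "crucial for a company's survival."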

    McCulloh and Carley used metanetwork analysis to analyze 1500 videos made by insurgents in Iraq. “The insurgents would videotape most of their attacks as propaganda,” says McCulloh. “As of March 2006, we had something like almost three out of every four U.S. deaths [on tape].” Carley extracted data from these videos, he says, “made a big network out of it, and ran a fragmentation algorithm which clustered them into little groups. And when you go back and look at the videos in those groups, you see forensic clues that identify who some of the insurgent cells were.” The details extracted from the videos are classified, “because we worry that the insurgents will learn what we're using,” McCulloh says. He and Carley worked with the U.S. military to “operationalize” the technique in Iraq. U.S. commanders there are faced with too much information and too little time to act on it. McCulloh says that Carley's metanetwork software helps them find clues and patterns—boosting the chances of catching or killing insurgents.
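    The clustering step itself can be reproduced with open tools. The sketch below runs modularity-based community detection from networkx over a hypothetical graph of videos linked by shared forensic features; nothing here corresponds to the classified data or to the specific fragmentation algorithm Carley ran.

    ```python
    # Splitting a hypothetical video network into densely connected groups.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    g = nx.Graph()
    # Hypothetical edges: two videos are linked if they share a feature
    # (same voice, same weapon type, same filming location, ...).
    g.add_edges_from([
        ("vid_01", "vid_02"), ("vid_02", "vid_03"), ("vid_01", "vid_03"),
        ("vid_04", "vid_05"), ("vid_05", "vid_06"),
        ("vid_03", "vid_04"),   # a single weak link between the two groups
    ])

    for i, cluster in enumerate(greedy_modularity_communities(g), start=1):
        print(f"cluster {i}: {sorted(cluster)}")
    ```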

    McCulloh claims that the technique has yielded dramatic results. “Sniper activity in Iraq is down by 70%,” he says, and he's confident that IED deaths also dropped because of the insights provided by Carley's programs, although he can't cite data. “It's a simple application of metanetwork analysis,” he says.

    But Sageman is skeptical that military progress in Iraq can be chalked up to network analysis. “I'm not convinced [metanetworks] have helped at all,” he says. “An easier explanation [for the drop in sniper attacks] might be the tribal uprising” against the insurgency in Iraq. “There's no way to know, and that's a big problem with this field in general.” Carley counters that Sageman “doesn't understand the methods.”

    If not all researchers are sold on counterterrorism network analysis, the U.S. military certainly is. The Army established a network science center in Aberdeen, Maryland, 2 years ago. This year, the U.S. Army Research Lab is committing $162 million to a new program, the Network Science Collaborative Technology Alliance, to get academic, industry, and military researchers working on “network-centric warfare.”

    Carley is one of the academics applying for military funding. She says that network analysis is ready for war. A decade ago, models could handle only simple information about “hundreds” of people at once. But now, she says, “network analysis tools can handle millions or tens of millions of nodes.”

  19. News

    Investigating Networks: The Dark Side

    1. John Bohannon

    A few months ago, Lawrence Wilkerson, a former U.S. State Department official and Army colonel, painted a nightmare scenario of how social network science can be applied in a battle zone, outlining something he called “the mosaic philosophy.”

    A few months ago, Lawrence Wilkerson, a former U.S. State Department official and Army colonel, issued a scathing criticism of how the United States has conducted war in recent years. He also painted a nightmare scenario of how social network science can be applied in a battle zone. Describing how U.S. forces gathered intelligence to identify networks of insurgents after the 2003 invasion of Iraq, Wilkerson outlined something he called “the mosaic philosophy.” The strategy, he claims, was similar to sequencing a genome. But instead of assembling millions of strands of DNA, investigators worked with data from interrogations of thousands of civilian prisoners.

    Wilkerson wrote in a 17 March 2009 article at The Washington Note—a political commentary Web site—that the U.S. military relied on network analysis “computer programs” so that “dots could be connected and terrorists or their plots could be identified.” Now based at the New America Foundation, a think tank in Washington, D.C., Wilkerson wrote that “it did not matter if a detainee were innocent.” According to Wilkerson, the objective of the mosaic approach was to “extract everything possible … to have sufficient information about a village, a region, or a group of individuals. … Thus, as many people as possible had to be kept in detention for as long as possible to allow this … to work.”

    Wilkerson told Science that his allegation is based on “classified documents to which I had access from 2000 to 2005” as chief of staff for former U.S. Secretary of State Colin Powell. He puts the total number of people detained in Iraq and Afghanistan at 50,000. He says he does not know which computer program or researchers were involved. The U.S. Department of Defense declined to comment. The allegation could not be independently verified.

    Analytical fodder?

    A former U.S. official alleges that innocent people in Iraq and Afghanistan were interrogated to feed data into terrorist network models.


    The general strategy of casting a wide net for intelligence gathering was familiar to all network researchers contacted by Science (see main text), but many expressed disbelief that it was carried out on such a grand scale in Iraq and Afghanistan. “Very scary if true,” says Marc Sageman, a network researcher and longtime U.S. military contractor, but it would be “incredible.” He adds that it would never work, even if it were tried.

    Researchers who create network analysis computer programs have had similar reactions. Software engineers at one of the industry leaders, Palantir Technologies in Palo Alto, California, say they had never heard of such abuses. And Wilkerson's “computer program” could not have been theirs, says Palantir CEO Alexander Karp, because the company only recently started courting the U.S. military for contracts. “If we ever learned that something like this was going on, we would immediately pull out,” he adds.

    Only one researcher contacted by Science had heard of the mosaic philosophy. “It's not a term used in academia. It's used in the military,” says Kathleen Carley, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, who does network analysis research for the U.S. military. “I am not part of the mosaic thing.” Carley does not dismiss the strategy as ineffective. Without knowing exactly how the interrogations and data analysis were carried out, she says, “I don't think you can decide whether it's unworkable or not.”

    Asked if the mosaic philosophy is actively applied by U.S. military or intelligence agencies, Wilkerson says emphatically that it is not. Revelations about the torture of prisoners by U.S. forces have “caused the pendulum to swing the other way,” he says. “The only place I still hear about its applicability and possible use is at the [National Security Agency]. … There, its use is to mine huge databases comprised of information gained from e-mails and telephone calls.”

    “Is this the dark side of networks? … I think it probably is,” says Brian Uzzi, a network scientist at Northwestern University in Evanston, Illinois. “All powerful methods grow a dark side. Their power is eventually used irresponsibly. I think the real fear here is that a method that has a reputation for finding new insights is falsely used.”
