News this Week

Science  05 Sep 2008:
Vol. 321, Issue 5894, pp. 1278


    Whole-Genome Data Not Anonymous, Challenging Assumptions

    1. Jennifer Couzin

    Last week, scientists learned that a type of genetic data that is widely shared and often posted online can be traced back to the individuals who offered up their DNA for research. The revelation, in a paper published in PLoS Genetics, prompted the National Institutes of Health (NIH) in Bethesda, Maryland, and the Wellcome Trust in the United Kingdom to strip some genetic data from their publicly accessible Web sites, and NIH to recommend that other institutions do the same.

    The concern is with studies in which researchers pool genetic data from hundreds of people to look for broad patterns of genetic inheritance. Because the pool consists of DNA from so many people, the assumption has been that it would be impossible to identify any one individual's DNA. The new study suggests that's not the case. NIH officials and others agree that the likelihood of a breach of privacy is low, largely because the pooled data must be matched against a particular person's isolated DNA—something that, currently, only researchers generally have access to.

    But the discovery that these DNA pools don't protect anonymity is still troubling, especially because no one had considered that a possibility. The first response to the results “is, ‘You're crazy,’” says David Craig, a geneticist at the Translational Genomics Research Institute in Phoenix, Arizona, who conducted the work. Less than 9 months ago, NIH was so confident in the anonymity of pooled genetic data that it recommended it be made public for all researchers to use.

    Craig found this confidence misplaced, for a simple reason: Geneticists now routinely examine hundreds of thousands of DNA variants, called single-nucleotide polymorphisms (SNPs), at a time, instead of hundreds as they did just a few years ago. As a result, they're gathering enough information about the pattern of SNPs in a pooled sample that it's feasible to deduce whether a particular individual, with her own unique SNP blueprint, is represented in a much bigger pool of DNA—even if that person's DNA was less than 1% of the mix. Craig and his colleagues managed to do this by ascertaining the distribution pattern of every single SNP—essentially, asking the same question 500,000 times. They were successful because, it turns out, every individual shifts a genetic pool subtly in certain directions, and studying enough SNPs unveils the pattern of those shifts. The biggest chance of error comes from false positives from relatives whose DNA may also appear in the pool, says Craig.
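    The logic of Craig's test can be illustrated with a toy simulation (all parameters here are hypothetical, and a simple correlation score stands in for the paper's formal test statistic): each member of a pool nudges every pooled allele frequency slightly toward his or her own genotype, and summing those nudges over thousands of SNPs separates members from outsiders.

```python
import random

random.seed(0)

NUM_SNPS = 5000   # real arrays assay hundreds of thousands; fewer suffice here
POOL_SIZE = 100   # individuals contributing DNA to the pooled study

# Reference allele frequencies in the general population (hypothetical values).
ref_freq = [random.uniform(0.1, 0.9) for _ in range(NUM_SNPS)]

def genotype(freqs):
    # A person carries 0, 1, or 2 copies of each SNP's variant allele.
    return [sum(random.random() < f for _ in range(2)) for f in freqs]

# Build the pool and compute its per-SNP allele frequencies.
pool = [genotype(ref_freq) for _ in range(POOL_SIZE)]
pool_freq = [sum(g[i] for g in pool) / (2 * POOL_SIZE) for i in range(NUM_SNPS)]

def membership_score(person):
    # Sum, over all SNPs, how consistently the person's genotype and the
    # pool deviate from the reference in the same direction. A pool member
    # drags every pool frequency slightly toward his own genotype, so his
    # score is systematically higher than an outsider's.
    return sum((person[i] / 2 - ref_freq[i]) * (pool_freq[i] - ref_freq[i])
               for i in range(NUM_SNPS))

member = pool[0]               # this person's DNA is 1% of the pool
outsider = genotype(ref_freq)  # this person's DNA is not in the pool

# The member's score should come out clearly higher than the outsider's.
print(membership_score(member), membership_score(outsider))
```

    With 500,000 real SNPs instead of 5,000 simulated ones, the separation is sharper still, which is why even a contribution of less than 1% of the pool is detectable.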

    Faceless no more.

    A new study shows that individuals can be pinpointed in pooled DNA.


    NIH officials were startled when Craig notified them of his findings about 2 months ago; they had their own statisticians repeat the experiments. “They said, ‘Yup, this works,’” says Elizabeth Nabel, head of NIH's Heart, Lung, and Blood Institute. “We still consider the risk to the individual relatively low,” she continues, but “there's a window of vulnerability.”

    The greatest concern is that identifying an individual this way could reveal sensitive health information. Genome-wide association studies compare data from people with and without a particular disease, so knowing which pool a person falls into can convey whether they have, say, cancer, or diabetes, or multiple sclerosis. “We have a false sense of security with pooled data,” says Pablo Gejman, a psychiatric geneticist at Northwestern University in Evanston, Illinois. “There is sensitive information” here.

    The Wellcome Trust has pulled data on about a dozen common diseases, and NIH has pulled data from nine genetic studies off two sites: dbGaP, which includes genome-wide association studies, and CGEMS, a site for cancer genetics work. The seven affected studies on dbGaP had been downloaded by about 1000 people all told, says James Ostell, who oversees that and other NIH databases.

    NIH officials are informing geneticists about the policy change through e-mails and their Web site; the Broad Institute in Cambridge, Massachusetts, has followed suit and removed pooled data from its site. This is “a logical choice, a necessary choice,” says Michael Boehnke, a statistical geneticist at the University of Michigan, Ann Arbor, whose data from a diabetes study was taken down from NIH.

    Nabel says that NIH is considering a new policy in which the pooled data will be released to researchers who apply, as is now the case with data traditionally considered much more sensitive.

    Still, Ostell and others say the current privacy risk is minimal. It could be of more concern 5 or 10 years from now, as genetic information proliferates. One possible scenario is that law enforcement agencies might turn to pooled data to determine whether their suspect is present—and even demand that the researcher help them identify him.

    Craig's work could help future forensic investigators in another way: Currently, they're unable to identify a suspect's DNA in a mixed sample—say, a sample of blood from several people—if the suspect's blood is less than 10% of the total. “A lot of forensic crime samples do have small contributions from people of interest, [and] right now we can do essentially nothing,” says Bruce Weir, a biostatistician who studies genetics and forensics at the University of Washington, Seattle.


    China Plans $3.5 Billion GM Crops Initiative

    1. Richard Stone*
    1. With reporting by Chen Xi and Jia Hepeng.

    BEIJING—Confronted with land degradation, chronic water shortages, and a growing population that already numbers 1.3 billion, China is looking to a transgenic green revolution to secure its food supply. Later this month, the government is expected to roll out a $3.5 billion research and development (R&D) initiative on genetically modified (GM) plants. “The new initiative will spur commercialization of GM varieties,” says Xue Dayuan, chief scientist on biodiversity at the Nanjing Institute of Environmental Science of the Ministry of Environmental Protection.

    A central aim is to help China catch up with the West in the race to identify and patent plant genes “of great value,” says Huang Dafang, former director of the Biotechnology Research Institute of the Chinese Academy of Agricultural Sciences in Beijing. Once intellectual property rights are in place, says Huang, transgenic technology could transform Chinese farming “from high-input and extensive cultivation to high-tech and intensive cultivation.”

    In the decade since China first allowed commercial planting of four GM crops, the government has moved cautiously, granting only two further approvals for small-market species: poplar trees and papaya (see table). Currently, just one GM crop—insect-resistant cotton—is planted widely, says Xue. China has balked at commercializing GM versions of staples such as rice, corn, and soybeans.

    Slim pickings.

    Of the six plants that China has approved for commercialization, only cotton is grown widely. A new initiative could pave the way for GM versions of the biggest prize of all: rice.


    That may change, as China's leadership has thrown its weight fully behind GM. “To solve the food problem, we have to rely on big science and technology measures, rely on biotechnology, rely on GM,” Premier Wen Jiabao told academicians last June at the annual gathering of the Chinese Academy of Sciences (CAS) and the Chinese Academy of Engineering. China's State Council, which Wen leads, approved the GM initiative in July.

    Details of the new initiative, including which crops will gain initial support, are being hammered out, scientists say. Some funds will go to R&D on transgenic livestock, an area that has lagged behind GM crops. By 2006, the Chinese government had granted permits for 211 field trials of 20 GM crops, including the six approved for commercial production. As in other countries, the varieties that China has commercialized so far are equipped with genes to resist pests, tolerate herbicides, or stay fresh longer—not genes that directly boost yields.

    Proponents note that China's cautious embrace of transgenic technology has yielded a major success story: GM cotton. Introduced into commerce in 1997, 64 varieties of pest-resistant cotton are now grown on 3.7 million hectares, or about 70% of the area devoted to commercial cotton, averting the use of 650,000 tons of pesticides, says Huang.

    The big prize is GM rice. Three years ago, Huang Jikun, director of CAS's Center for Chinese Agricultural Policy in Beijing, and colleagues reported that field trials of GM rice in China were going well—boosting yields and reducing pesticide use on plots—and predicted that the varieties were on the threshold of commercialization (Science, 29 April 2005, p. 688). But the Chinese government is reluctant to tinker with the country's most important crop and has put off commercialization. The new initiative might break the logjam, says Huang Jikun. “I hope the commercialization of GM rice will come within a couple of years,” he says.

    Although the central government has not released a budget figure for the new initiative, a spokesperson for the Ministry of Agriculture told Science that it would cost $3.5 billion over 13 years. Half is expected to come from local governments on whose land GM crops will be grown and from agricultural biotechnology companies. “It's a new way to support a big science project in China,” says Huang Dafang. Another departure from other R&D initiatives, he says, is that each funded program is expected to produce an economic payoff.

    One component of the initiative will be to educate the public about GM crops, says Huang Jikun. Although China is unlikely to see the sort of protests that have derailed field trials and commercialization in Europe, there are currents of disquiet in the general population. “For consumers, the safety of GM crops is the biggest worry. Just like some people are afraid of ghosts, some people are afraid of GM crops,” says Zeng Yawen of the Biotechnology and Genetic Resources Institute of the Yunnan Academy of Agricultural Sciences in Kunming. Although Zeng believes that GM food safety will be demonstrated adequately, he worries that the new initiative will push China to “move too fast to commercialize GM varieties.”

    But with questions mounting about China's ability to feed itself, others contend that not pushing ahead with GM varieties could be more detrimental than any theoretical hazard. “Any kind of new technology may have risk,” says Huang Dafang. But legitimate concerns, he says, should not be overshadowed by scare tactics designed to “mislead the public in the name of environmental protection.” With the country's leaders firmly behind GM crops, it's unlikely that any protests would get very far.


    A Detailed Genetic Portrait of the Deadliest Human Cancers

    1. Jocelyn Kaiser

    Three studies published this week have given researchers their most detailed look so far at the genetic mutations that underlie the deadliest of human cancers: pancreatic cancer and the brain tumor glioblastoma. They have firmed up the role of key genes and also found that scores of aberrant genes are involved in relatively few cell signaling pathways. One study also unearthed a gene never before linked to cancer that is mutated in a substantial fraction of glioblastoma tumors. “It shows we can still be surprised” by the biology of cancer, says Michael Stratton, who oversees a cancer gene sequencing project at the Sanger Institute in Hinxton, U.K.

    These studies are all based on the premise that information gleaned from systematically cataloging the main mutations in tumors will be worth the high cost. Three years ago, when genome sequencer Eric Lander of the Broad Institute in Cambridge, Massachusetts, proposed spending $1.5 billion on what is now called The Cancer Genome Atlas (TCGA), skeptics helped persuade the U.S. National Institutes of Health to start with a 3-year, $100 million pilot project. One of the glioblastoma studies is the first fruit of that effort.

    Meanwhile, a team led by Bert Vogelstein, Kenneth Kinzler, and Victor Velculescu at Johns Hopkins University in Baltimore, Maryland, had begun a private cancer genome project, starting with breast and colorectal cancer (Science, 8 September 2006, p. 1370). Now this team and collaborators have sequenced the coding regions of 20,700 genes—nearly all the known genes in the human genome—in 22 glioblastoma and 24 pancreatic cancer samples. They also looked for abnormalities in gene copy number and gene expression.

    In two papers published online by Science this week, they report finding hundreds of genes that were mutated in these two cancers. There were an average of 63 altered genes in each pancreatic tumor and 60 per glioblastoma. The mutations varied from tumor to tumor, but the most important tended to fall in the same cell pathways. For example, 12 specific pathways were disrupted in at least 70% of pancreatic tumors. “It points to a new way of looking at cancer,” says Vogelstein, who suggests that treatments should target these pathways, not the products of single genes.

    One of the altered genes found in the glioblastoma study, IDH1, appeared in 12% of tumors, and more often in younger patients and those with secondary tumors, the Johns Hopkins team reported. A change in an amino acid of the encoded protein seems to help patients with this mutation live longer than others with glioblastoma.

    The third study, published online by Nature, analyzed more than 200 glioblastoma samples. It surveyed all the samples for genetic alterations such as changes in copy number and probed about half the samples for mutations in 600 genes already implicated in cancer, says co-leader Lynda Chin of the Dana-Farber Cancer Institute in Boston (Science, 4 July, p. 26). The study found many of the same aberrant genes that the Johns Hopkins team uncovered—but not IDH1, which was not among the genes the team sequenced. Their larger sample set will serve as a reliable reference on how frequently mutations occur in glioblastoma, including several genes for which the evidence was limited until now, says Chin. Having methylation data and samples from patients who received treatment also allowed the team to finger mutations in DNA repair genes that may help explain why tumors that initially respond to temozolomide, the main drug for glioblastoma, can become resistant to subsequent therapies.

    Probing a killer.

    Two new studies tally genetic glitches that cause the brain tumor known as glioblastoma, orange in this image of brain cells.


    TCGA is preparing follow-on papers, for example on using the molecular data to classify subsets of tumors, Chin notes. It will also expand the search: The project, which is also studying lung and ovarian cancers, will use new technologies to sequence thousands of genes in each tumor.

    “I see them [the public and private glioblastoma studies] as wonderfully complementary,” says pathologist Paul Mischel of the University of California, Los Angeles, who studies glioblastoma. Other researchers who hope to use the findings to improve cancer treatment agree. “This is a start and a wonderful start,” says Santosh Kesari, a neurooncologist at Dana-Farber.


    Hippocampal Firing Patterns Linked to Memory Recall

    1. Greg Miller

    The hippocampus, tucked deep inside the temporal lobes of the brain, has been intensely studied for its role in recording memories. Now two studies—one with rats and one with people undergoing surgery for intractable epilepsy—suggest that patterns of neuron firing in the hippocampus are also involved in recalling past experiences.

    Memory aid.

    A rat's hippocampus (above) generates sequences of neural firing that may help it remember what to do next.


    “The two papers are significant because they point directly to reactivation of neural activity sequences as a mechanism for memory recall,” says Edvard Moser, a neuroscientist at the Norwegian University of Science and Technology in Trondheim. Such a mechanism may underlie several functions attributed to the hippocampus, Moser says, including navigation, memory, and planning future actions.

    In the rat study, researchers led by Eva Pastalkova and György Buzsáki of Rutgers University in Newark, New Jersey, simultaneously recorded the activity of scores of hippocampal neurons as rodents ran through a maze shaped like a squared-off figure eight. The rats always started by running down the maze's central arm and then chose to continue down either the left or the right arm. The researchers trained them to alternate between the right and left arms each time they ran the maze. In between runs, the rats spent 10 to 20 seconds on a running wheel.

    During this delay period, neurons in the hippocampus fired in sequences that predicted which arm the rat would run next, the researchers report on page 1322. Even in the few cases when a rat goofed and went the wrong way, the preceding firing sequence predicted its mistake. These sequences—which resemble sequences that occur as a rat actually runs through a maze—likely represent the brain's internal mechanism for planning (or reminding itself) what it has to do next, Buzsáki says.

    The findings confirm a decade-old prediction that the hippocampus might generate such firing sequences to maintain important information during a delay in a task, says David Redish, a neuroscientist at the University of Minnesota, Minneapolis. Redish notes that consistent patterns of activity emerged only when the rat had something to remember. “When the rat is just running on a wheel for the heck of it in its home cage, they don't see it.”

    In the human study, published online this week in Science, researchers led by Hagar Gelbard-Sagiv of the Weizmann Institute of Science in Rehovot, Israel, and Itzhak Fried of the University of California, Los Angeles, recorded from hundreds of neurons in and around the hippocampus of 13 epilepsy patients undergoing operations in which surgeons introduced electrodes into the brain to locate the source of their seizures. The patients watched several 5- to 10-second video clips that depicted a variety of landmarks, people, and animals. A few minutes later, the researchers asked the patients to freely recall the clips they'd just seen and call them out as they came to mind. (Most subjects easily remembered almost all of the clips.) The first time the patients saw the clips, many neurons in the hippocampus and a nearby region, the entorhinal cortex, responded strongly to certain clips and weakly to others—preferring a clip from The Simpsons, say, to ones showing Elvis or Michael Jordan. Later, each neuron began firing strongly a second or two before the subject reported recalling that neuron's preferred clip, but not when the subject recalled another clip.

    “Previous work [with animals] has shown that such reactivation occurs during sleep as well as during certain behaviors where memory is needed, but it has remained unclear whether reactivation actually reflects recall of the memory,” says Moser. Fried's findings are exciting because they provide the first direct link between reactivation of hippocampal neurons and conscious recall of a past experience, says neuroscientist Matthew Wilson of the Massachusetts Institute of Technology in Cambridge.

    Both studies have implications for an ongoing debate about the relationships among various functions attributed to the hippocampus, says Lynn Nadel, a neuroscientist at the University of Arizona in Tucson. Nadel says that the findings fit with his view that the neural mechanisms underlying spatial navigation, episodic memory, and action planning may be one and the same. “One might say at this point that the available data suggest that the hippocampus is critical for ‘navigating’ through space not only in the present but also in the past, to retrieve memories, and in the future, to predict the results of actions,” Nadel says.

    MATHFEST 2008

    Shapeshifting Made Easy

    1. Barry Cipra


    Mathematicians aren't squeamish about doing dissections, but they do often come unhinged. Now computational geometers at the Massachusetts Institute of Technology (MIT) in Cambridge have proven it's possible to do mathematical dissections without falling to pieces.

    The victims in this case are not frogs but polygons: simple geometric shapes bounded by straight sides. In the early 19th century, mathematicians proved that any two polygons with the same area can be cut into a finite number of matching pieces. For example, it's possible to cut a square into four pieces and rearrange them into an equilateral triangle.

    Location, location, location.

    A square can become an equilateral triangle without ever falling apart (top). The same is true for other pairs of polygons. The proof starts with a trick that, in effect, moves hinges around (bottom).


    About 100 years ago, the English mathematician and puzzle designer Henry Dudeney added an extra wrinkle to the dissection challenge: He showed that the rearrangement from square to equilateral triangle can be done with pieces connected by hinges (see figure, above). Dissection enthusiasts have since devised many more hinged transformations.

    In 1997, Greg Frederickson, a computer scientist and geometric-dissection buff at Purdue University in West Lafayette, Indiana, asked whether what Dudeney did for the square and triangle can be done for any two polygons. The question caught the attention of Erik Demaine, then beginning graduate work in computer science. A decade later, Demaine, now a professor at MIT, has the answer: in a word, yes.

    Demaine returned to Frederickson's problem last fall with his father, Martin Demaine, and four students in a problem-solving seminar: Timothy Abbott of MIT, Zachary Abel and Scott Kominers of Harvard University, and David Charlton of Boston University. The group came up with a general procedure for turning an arbitrary dissection into a hinged dissection. Demaine described their proof at MathFest. “It was a surprising result to me, because I thought it was false,” he says.

    Their proof starts with an idea “so crazy that we never thought of it,” Demaine says. That idea is simply to take an unhinged dissection of one polygon and arbitrarily add hinges, then subdivide the pieces and add additional hinges until the polygon can contort into its equal-area partner. The key step is to show that judicious subdivision can, in effect, take a hinge that connects, say, piece A to piece B and move it to connect A to C (see figure).

    “The movement is magical,” Frederickson says. On the other hand, he notes, “you don't get very pretty dissections this way.”

    The construction works on three-dimensional (3D) dissections as well, which could help guide the design of reconfigurable robots—modular machines that rearrange their parts like real-life Transformers. In 3D, unfortunately, equal volume doesn't guarantee the existence of a dissection. But when dissections do exist, the MIT group's construction shows that they can be refined into hinged dissections. The results are an encouraging first step toward applications, Demaine says: “Now the optimization begins.”

    MATHFEST 2008

    Sweet Inspiration

    1. Barry Cipra


    Geometers find ideas everywhere. Take Mozartkugel, the famously spherical chocolate confections from Austria. Erik Demaine, his father, Martin, and colleagues John Iacono at the Polytechnic Institute of New York University and Stefan Langerman at Université Libre de Bruxelles have worked out a more efficient way to wrap them.

    As mapmakers know from trying to go the other way, flattening a globe invariably distorts areas on its surface. Conversely, wrapping a globe with an inflexible wrapper (such as foil) crinkles the wrapper with infinitely many tiny folds. As a result, the area of any wrapper must exceed the surface area of the chocolate ball (4π square units for a ball with a radius of 1).

    One popular brand of Mozartkugel comes in a square foil of side length π√2 (π times the square root of 2). Another comes in a π × 2π rectangular wrapper. In each case, the wrapper's area is 2π²—some 57% greater than the surface area of the sphere. Demaine and crew set out to see if they could do better.

    The computational chocolatiers found that they could achieve a 0.1% savings over current practice with an equilateral triangle whose area turns out to be approximately 1.9986π². (The exact value for 1.9986 … is a messy formula involving, for no obvious reason, the square root of 57.) But in fact, all that really covers the kugel is a three-leaf petal inside the triangle (see figure). That means the tips of the triangle can be cut off, leaving a wrapper of area 1.8377π².
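    The area comparison is easy to verify by plugging the published constants into a quick back-of-the-envelope computation for a unit-radius ball:

```python
import math

sphere_area = 4 * math.pi                     # surface area of a radius-1 ball

square_area = (math.pi * math.sqrt(2)) ** 2   # square foil, side pi*sqrt(2)
rect_area = math.pi * (2 * math.pi)           # pi x 2*pi rectangular wrapper

# Both commercial wrappers come to 2*pi^2, about 57% more than the sphere.
print(square_area / sphere_area)              # pi/2, roughly 1.5708

trimmed_area = 1.8377 * math.pi ** 2          # the clipped-triangle wrapper
print(trimmed_area / sphere_area)             # roughly 1.443: ~44% overhead
```

    Since any rigid wrapper must exceed the 4π surface area, the trimmed trefoil's overhead of about 44%, down from 57%, is a genuine saving rather than an artifact of measurement.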


    The clipped triangular wrapper offers another advantage: The length of its perimeter, 5.3503π, is shorter than that of any other shape the researchers have found. (The square wrapper has a perimeter of 5.6569π; the rectangular one, 6π.) So a trefoil wrapper would not only save foil, Demaine and colleagues conclude, it would also be cheaper to cut. The potential reduction in the carbon footprint associated with Mozartkugel materials and manufacturing, they joke, “partially solves the global-warming problem and consequently the little-reported but equally important chocolate-melting problem.”

    MATHFEST 2008

    A Royal Squeeze

    1. Barry Cipra


    In 1850, the great German mathematician Carl Friedrich Gauss took a shine to a funky little counting problem: How many ways can eight queens be placed on a chessboard so that no two queens attack one another (i.e., line up horizontally, vertically, or diagonally)? It's not obvious it can be done at all, but it turns out there are 92 solutions. Gauss didn't spot them all, proof in itself that the problem is a bit of a poser.

    Modern computers can easily find all 92, but mathematicians have upped the ante so that even Deep Blue would scratch its silicon head, mainly by making the board larger. There are, for example, 2,207,893,435,808,352 ways of placing 25 nonattacking queens on a 25 × 25 chessboard, a computation completed 3 years ago at INRIA.

    “There's a lot of interesting theory behind these questions,” notes Loren Larson, a chessboard problem expert in Northfield, Minnesota. “They're also nice programming exercises. They're good examples of backtracking algorithms,” also known as depth-first searches.
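    A minimal version of the backtracking search Larson describes fits in a few lines (a generic sketch; the 25 × 25 count required a massively parallel computation, not code like this):

```python
def count_queens(n):
    """Count placements of n nonattacking queens on an n x n board
    by depth-first search, placing one queen per row."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()  # occupied columns and diagonals

    def place(row):
        nonlocal count
        if row == n:
            count += 1        # every row holds a queen: one full solution
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue      # square attacked by an earlier queen: backtrack
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return count

print(count_queens(8))  # 92, the count Gauss puzzled over
```

    The search abandons a branch the moment a square is attacked, which is what keeps the 8 × 8 case trivial for a modern machine even though the count explodes on larger boards.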

    In a talk at MathFest, Doug Chatham of Morehead State University (MSU) in Kentucky described a variant he and collaborators have explored, in which pawns are allowed on the chessboard. The pawns interrupt the queens' line of sight, making it possible for more queens to fit on the board. How many more queens, they wondered, do the pawns make possible?

    Chatham and crew—MSU colleagues Gerd Fricke and R. Duane Skaggs, Maureen Doyle of Northern Kentucky University in Highland Heights, Matthew Wolff of Pyramid Controls Inc. in Cincinnati, Ohio, and MSU student Jon Reitmann—have proved that each additional pawn permits an extra queen, provided the board is large enough. For example, with two pawns, it's possible to get 10 queens on a standard 8 × 8 board (see figure). In the current proof, fitting an extra k queens using k pawns on an N × N board requires N to be greater than 25k, Chatham notes, but adds, “We believe the actual minimum sizes are much smaller.”

    Vivat regina.

    Adding pawns makes a classic chessboard problem even more queenly.


    There are no immediate applications for the queens-and-pawns problem, Chatham says, but the original nonattacking-queens problem has found uses in computer science for parallel memory schemes and in statistical physics for particle models with long-range interactions. “We hope to find similar applications for our problem,” he says.

    MATHFEST 2008

    Taking the Edge Off

    1. Barry Cipra


    Math has a lot to say about packing things together. The abstract problem of cramming, for example, equal-sized circles into a larger square has applications as far-flung as error-correcting codes for digital communications and the physics of granular materials such as sand. But what if the square has no edges? A quartet of researchers has shown how packing works in such a borderless space.


    The space in question is a torus, a shape like the surface of an inner tube. To topologists, a torus is equivalent to a parallelogram with its opposite edges glued together. On the unfurled, flattened-out torus map, anything leaving on one side immediately reenters from the other, as in many video games. William Dickinson of Grand Valley State University in Allendale, Michigan, and undergraduates Daniel Guillot of Louisiana State University, Baton Rouge, Anna Castelaz of the University of North Carolina, Asheville, and Sandi Xhumari of Grand Valley have spent the past two summers studying circle packings in tori.

    Because a torus has no boundary, the circles are constrained only by one another—just as they would be on a patch of regularly repeating patterned wallpaper. Dickinson and students classified the graphs that can result when lines are drawn connecting centers of tangent circles (red lines in the figure, below), then set to work analyzing which ones lead to the densest packings (i.e., packings with circles of the largest possible radius). For five circles—the first truly challenging case—they found 20 different ways the circles could be arranged on the torus.

    They applied the theory to two particular tori: the “square” torus formed by connecting opposite edges of a square, and the “triangular” torus, which starts from a rhombus with a 60-degree angle. Guillot and Castelaz found the best five-circle packing for the triangular torus last summer (2007), and Xhumari did the same for six circles this summer. Together, the ideas they developed enabled Dickinson to nail down the densest packing for five circles on the square torus. It occupies π/4, or 78.5%, of the square torus, as compared with 71.1% on the triangular torus (see figure).


    To cover a torus with circles, researchers studied how to pack them into a square or rhombus whose opposite edges are connected.


    “In general, it is very difficult to prove that a particular packing is optimal,” says Ronald Graham, a circle-packing expert at the University of California, San Diego. Working without boundaries may make proofs easier to come by, he thinks, “but that is just an impression.”


    Investigating the Psychopathic Mind

    1. Greg Miller

    With a mobile brain scanner and permission to work with inmates in New Mexico state prisons, Kent Kiehl hopes to understand what goes awry in the brains of psychopathic criminals


    ALBUQUERQUE, NEW MEXICO—Kent Kiehl remembers his first conversation with a psychopath as if it were yesterday. Kiehl had just started a graduate program in psychology, and he intended to study the criminal mind by interviewing prisoners. His first subject was a thief who'd made a fortune robbing banks in North America and lived the high life for years, renting luxury apartments across Europe and—if he did say so himself—enjoying a great deal of success with the fairer sex. “Have you ever had 15 women in one night?” he asked Kiehl.

    The man was behind bars not because of a heist gone wrong but because one of his girlfriends was cheating on him. He tracked her down at a motel room and burst in with his gun drawn. He shot her lover, but the man managed to get away. The woman later testified against him in court. If he could do it all over again, he told Kiehl, he would have killed them both. Such stories fascinate Kiehl, now an associate professor of psychology and neuroscience at the University of New Mexico and director of Mobile Imaging Core and Clinical Cognitive Neuroscience at the Mind Research Network (MRN) in Albuquerque. “The other 300 or so psychopaths I've interviewed are just as interesting,” he says.

    At age 38, Kiehl is embarking on a project he hopes will unravel the neural basis of psychopathy, a suite of personality and behavioral traits that is far more common in violent criminals than in the general population and is a strong predictor of repeat offenses. Given the crime and other societal costs caused by psychopathic individuals, Kiehl says, this group has been woefully understudied. He intends to change that. With a custom-built mobile magnetic resonance imaging (MRI) scanner—roughly $2.3 million of equipment packed into a 15-meter-long trailer—and permission from the New Mexico governor to work in all 12 state prisons, Kiehl aims to scan 1000 inmates a year.

    “We'll have to see if he gets that much done, but if anybody can do it, Kent can,” says Joseph Newman, a psychologist at the University of Wisconsin, Madison. “He has big ideas, and he pursues them energetically.”

    Kiehl's team conducts hours of interviews with each subject to assess them for psychopathy, substance abuse, and other mental health problems. In addition to functional MRI (fMRI) experiments to investigate neural activity during various tasks, they're also collecting anatomical images of the brain and DNA samples that could eventually be used to search for genetic risk factors—all with the prisoners' full consent and cooperation and all to be used solely for research. Kiehl's research is funded by four R01 grants from the National Institutes of Health, which pay about $900,000 a year in direct costs; MRN paid for the scanner.

    Depending on what he finds, Kiehl's work could raise a host of legal and ethical questions. Could brain scans or blood tests one day improve on the personality profiles and other low-tech methods now used to assess the degree of risk a prisoner poses to society? If so, how should they be used? Could a better understanding of the psychopathic brain alter the way we think about the culpability of certain criminals? Could it point the way to interventions that prevent recidivism?

    We'll never know unless we do the research, Kiehl says: “We just have no idea how their brains are different, how they got that way, and how we might be able to treat the condition.”

    Local boy does bad

    Kiehl's interest in psychopathy goes back to his childhood. He grew up in a middle-class neighborhood in Tacoma, Washington, not far from the boyhood home of serial killer Ted Bundy. While Kiehl was in grade school, Bundy was on a nationwide rampage, killing dozens of young women. Kiehl's father was a newspaper editor at the time, and Bundy's exploits were a common topic of discussion at the family dinner table.

    Bundy exhibited several defining traits of psychopathy. He was cunning and manipulative, often donning disguises or feigning injury to lure women into a vehicle, and his preferred method of killing—crowbar blows to the head—as well as his proclivity for sex with his dead victims suggest a stunning lack of empathy. “Why would someone from my neighborhood end up being so bad?” Kiehl remembers wondering at the time.

    By the time Bundy was executed in Florida in 1989, Kiehl was fantasizing about becoming a professional athlete. He entered the University of California (UC), Davis, that year after being recruited to play on the football team. Solidly built at 6′2″, Kiehl still exudes an athlete's self-confidence. On a recent afternoon, he collected on a $100 bet with his lab manager over how far he could hit a golf ball. “I bet I could hit a ball farther than Tiger Woods,” he boasted.

    When a knee injury forced Kiehl to reconsider his life goals, he recalled his fascination with Bundy and began getting more interested in neuroscience. He rotated through the laboratories of several UC Davis neuroscientists, setting his sights on graduate work with psychologist Robert Hare at the University of British Columbia (UBC) in Vancouver, Canada. Hare is a preeminent psychopathy researcher who in 1980 published the first version of what has become the main tool for diagnosing psychopathy. In its current incarnation, the Psychopathy Checklist-Revised (PCL-R) scores subjects on 20 traits indicative of psychopathy, including callousness, impulsivity, and a history of behavioral problems. People in the general population typically score a four or five on the 40-point scale, Hare says. A score of 30 is widely used as a benchmark for psychopathy.

    Psychopathy is not listed in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV). The DSM-IV diagnosis of antisocial personality disorder captures some of the external manifestations of psychopathy, including impulsivity and antisocial behavior, but ignores personality traits such as glibness, callousness, and lack of remorse that are scored by the PCL-R. Studies with prison populations have found that roughly 20% (slightly more or less, depending on the security level of the prison) of inmates qualify as psychopaths. Incarcerated psychopaths have committed an average of four violent crimes by the age of 40, Kiehl says. More than 80% of those who are released from prison commit another crime, usually a violent one, within 3 years, compared with 50% for the overall prison population. “Psychopathy is the single best predictor of violent recidivism,” says Kiehl, who hoped to collaborate with Hare to study the brains of psychopathic criminals.

    But Hare wasn't interested in taking him on. “I had a lot of really outstanding students applying to work in my lab, and his grades weren't particularly great,” Hare says. Not one to give up easily, Kiehl launched a campaign that included a barrage of recommendation letters from UC Davis faculty members; he also drove through a snowstorm from Tacoma to Vancouver to hand-deliver a few bottles of California wine that he knew Hare would appreciate. “That did it,” says Hare. “He wore me down.”

    An emotional problem?

    Long before fMRI scanners came along, researchers suspected that psychopathy springs from a defect in emotional processing in the brain. Several of the disorder's signature traits hint at this, as do early studies that found blunted physiological responses—by measures such as heart rate and skin conductance—to emotionally evocative photographs in psychopaths.

    Such abnormalities cast obvious suspicion on the amygdala, the hub of emotion in the brain. In the first fMRI study of psychopathy, published in 2001 in Biological Psychiatry, Kiehl and UBC colleagues found reduced amygdala activity in psychopathic criminals compared with nonpsychopathic criminals in response to emotionally charged words. A malfunctioning amygdala is likely to be one crucial factor in psychopathy, says James Blair, a cognitive neuroscientist at the National Institute of Mental Health in Bethesda, Maryland. Human and animal studies have shown that the amygdala is essential for learning to avoid behaviors with unwanted outcomes, he notes. By preventing children from learning to avoid actions that harm other people, faulty wiring in the amygdala could derail normal social development and contribute to the callous, unemotional traits seen in psychopaths, he proposes. In the June issue of The American Journal of Psychiatry, his research group reports that children with callous, unemotional traits have less amygdala activity than other children when viewing photos of fearful facial expressions.

    Lofty goals.

    Kent Kiehl, shown here at the top of Mount Shasta, plans to study the brains of 1000 inmates a year with his mobile MRI scanner.


    Other researchers question whether the amygdala is really the source of the problem, however. Newman, for example, has long argued on the basis of behavioral evidence that deficits in regulating attention may be the central issue for psychopaths. “Once they start paying attention to some goal they want, they ignore cues that would otherwise activate the amygdala,” he says.

    Kiehl takes an even broader view. He suspects that psychopathy involves disruptions to a network of “paralimbic” regions in the brain's temporal and frontal lobes that contribute to emotion, attention, decision-making, and other cognitive functions. Resolving some of the confusion about which cognitive processes—and which brain regions—are dysfunctional in psychopathy is a major goal of his neuroimaging work in New Mexico.

    But neuroimaging has limitations (Science, 13 June, p. 1412). The behaviors that can be studied inside an fMRI scanner, for example, are necessarily simplified and artificial. Proving that any given neural abnormality that shows up in imaging actually contributes to psychopathic traits and behavior in real life is never easy, says Adrian Raine, a clinical neuroscientist at the University of Pennsylvania. And then there's the chicken-and-egg problem. “Is it leading a violent, psychopathic way of life that causes the structural and functional impairments we find, or is it the other way around?” Raine asks. “It's going to be hard to answer that very important question.”


    On a blazing hot day in late July, Kiehl's mobile scanner was parked inside the gates topped with razor wire at the Youth Diagnostic and Development Center in Albuquerque. From the outside, the mobile resembles any trailer you'd see on an 18-wheeler, albeit cleaner than most. Kiehl spent a year working with engineers at Siemens to design it and ensure that the scanner's magnetic field would remain stable in different locations. Inside, the mobile looks like an ultra-high-tech recreational vehicle. The scanner sits at one end, its magnetic cylinder a pale blue doughnut extending from floor to ceiling. Flat-screen monitors adorn the walls in the adjacent control room, and next to that a small sitting room contains a stack of magazines for the benefit of a corrections officer who waits here while a juvenile prisoner gets scanned.

    All experiments are off-limits to the media, in part because of concerns about the privacy of prisoners but largely because of a bad experience Kiehl had in Canada. A television network broadcast an interview with one of his research subjects that was edited to make the guy seem even scarier than he was, Kiehl says. When the inmate was denied parole a short time later, he threatened to kill any other inmates who participated in Kiehl's research; he also threatened to hit Kiehl with a chair. Now Kiehl says he won't jeopardize his staff by allowing the media to watch experiments or interview inmates.

    Despite the nature of some of their subjects' crimes, Kiehl's students and postdocs say that they've never felt threatened. “They tend to really like us,” says postdoc Matthew Shane. “They enjoy any excuse to talk with someone from outside the prison.”

    In one of the first studies using the mobile scanner, Kiehl's postdoc Carla Harenski and colleagues investigated how the brains of adult male prisoners respond to morally charged photographs, such as an image of a man holding a knife to a woman's throat. The inmates also rated the severity of the “moral violation” depicted in the photographs on a five-point scale. Those who gave high scores, suggesting greater sensitivity to moral violations, tended to have more activity in the superior temporal sulcus, a region implicated in previous studies of moral judgments, the researchers reported at an April meeting of the Cognitive Neuroscience Society. The team has subsequently scanned a bigger sample of prisoners and is investigating whether activity in this and other brain regions differs between those who are psychopathic and those who aren't.

    Neural roots.

    Kiehl suspects that disruptions to paralimbic brain regions (light areas) underlie psychopathy.


    Into the courtroom?

    Such differences in brain activity within prison populations could potentially prove useful in assessing the risk posed by individual criminals, perhaps as a supplement to the PCL-R, Kiehl says. That checklist is currently used in dozens of countries. Depending on the jurisdiction, PCL-R scores are considered during sentencing and parole hearings. Some prisons use them, along with other factors, to determine security measures and treatment options.

    Whether brain scans will ever prove useful in such settings depends on whether they add any predictive power, says Walter Sinnott-Armstrong, a philosopher at Dartmouth College and co-director of the MacArthur Foundation's Law and Neuroscience Project in Hanover, New Hampshire. Not everyone is optimistic. “It's not some sort of crystal ball that's going to tell you who's going to reoffend in 5 years' time,” says Essi Viding, a cognitive neuroscientist at University College London. She also questions the practicality of the approach, given that MRI scans cost $1000 or more apiece and require substantial technical expertise. Even so, research on the neural basis of psychopathy could have important legal implications, says Sinnott-Armstrong. For example, he says, if future research points to a diminished moral capacity due to a neurodevelopmental defect, that could be relevant in court, where a defendant's understanding of the wrongfulness of his actions has a bearing on the verdict.

    Kiehl gets impatient with such hypotheticals. For him, the ultimate question is how best to intervene—ideally, early in life before psychopathic traits become ingrained. The conventional wisdom is that psychopathy is untreatable, but that's based “more on clinical lore than solid research,” says Michael Caldwell, a psychologist at the Mendota Juvenile Treatment Center and the University of Wisconsin, Madison. One widely cited study found that psychopaths who participated in a treatment program in the 1970s actually did worse than those who didn't, Caldwell says. But given that the treatment regimen involved nude encounter groups and LSD, those findings should perhaps be taken with several grains of salt. Kiehl says he's been buoyed by a recent series of papers by Caldwell and colleagues that suggest that targeted interventions, including cognitive behavioral therapy and family counseling, with juvenile offenders with psychopathic traits can prevent future crimes.

    Caldwell, Newman, and other veteran psychopathy researchers say that they're encouraged to see Kiehl's project getting off the ground because public support and funding for psychopathy research have been hard to come by in the past. “If someone is cruel and always out for himself, it's not something that engenders sympathy, concern, and the desire to understand it,” says Newman. “My view is that it's a really important disorder that needs to be understood.” Kiehl says he couldn't agree more.


    The Overture Begins

    1. Adrian Cho

    Next week, physicists at the European particle physics lab, CERN, will fire up the world's biggest atom smasher. Expectations are sky-high, but discoveries may still be years away

    Ring of fire.

    The new machine will smash particles at energies seven times the previous record.


    Fourteen years ago, scientists at the European particle physics laboratory, CERN, near Geneva, Switzerland, had only plans for a new highest energy particle smasher. Now, thanks to the efforts of thousands of people, they have a gargantuan machine, the $5.5 billion Large Hadron Collider (LHC), which stretches through a 27-kilometer ring of tunnel between Lake Geneva to the east and France's Jura Mountains to the west (Science, 23 March 2007, p. 1652). “It seemed like an enormous mountain to climb, that's for sure, back when we didn't have even a single magnet,” says CERN's Lyndon Evans, who has led the project since its inception.

    Evans says he's had moments of despair. In 2004, a manufacturing error forced workers to rip out and rebuild 3 kilometers of the high-tech plumbing that carries frigid liquid helium to the accelerator's superconducting magnets. In 2002, cost overruns led officials to delay the completion of the LHC by a year. But now, as researchers test the LHC's myriad subsystems, “it really feels like an old friend,” Evans says. “It acts exactly like it is supposed to act.” Physicists around the world hope their ami, the most complex scientific apparatus ever built, continues to behave next week when, for the first time—provided that lawsuits do not force a delay (see sidebar, p. 1291)—researchers try to circulate particles through its twin rings.

    In the quest to unravel the universe's inner workings, the 10 September start-up of the LHC marks the beginning of a new age of exploration. The collider should bag the long-sought Higgs boson, the missing link in physicists' “standard model” of the known particles and the one thought to give the others their mass. It could glimpse a slew of new particles, such as those predicted by a scheme called supersymmetry, or even reveal new dimensions of space. Other colliders hammered out how the standard model is structured; the LHC should answer deeper questions about why the model is as it is, says Gordon Kane, a theorist at the University of Michigan, Ann Arbor. “The LHC is a ‘why’ machine,” he says.

    But answers most likely won't come right away, cautions CERN's Peter Jenni, spokesperson for the 2500-member team working with the 25-meter-tall, 45-meter-long ATLAS particle detector—one of four big detectors the LHC will feed. “People should definitely not take it for granted that big things will happen immediately,” he says. If all goes well, the LHC will start smashing particles in October, and oddities could jump out right away. More likely, it will take a few years for the LHC to clinch the discovery of the Higgs or something even stranger. Still, after 3 decades in which the standard model has answered every question asked at particle accelerators, physicists are eager to see something really new.

    What a blast!

    In this simulation, a Higgs boson is born and decays inside the ATLAS particle detector.


    First off, look for something old

    Like all colliders, the LHC aims to produce fleeting bits of matter not seen in the everyday world. As Einstein noted, energy and mass are equivalent. So physicists can generate heavier exotic particles by smashing known ones together with sufficient energy. Blasting protons into protons at energies seven times as high as the previous record, the LHC could cough up new particles more than 1000 times as massive as a proton. But first, researchers will simply search for familiar standard-model particles.

    Ordinary matter consists of particles called “up quarks” and “down quarks,” which combine to make the protons and neutrons in atomic nuclei; the electrons that make up the rest of the atom; and wispy particles called neutrinos that emerge in a particular type of nuclear decay. This first family of particles is flanked by two sets of heavier, unstable relatives: the strange, charm, top, and bottom quarks; the electron's beefier cousins, the muon and the tau lepton; and two more “flavors” of neutrinos. Still other particles convey forces: Photons carry the electromagnetic force, the massive W and Z bosons convey the weak nuclear force, and gluons transmit the strong nuclear force that binds protons and neutrons.

    Tracking such familiar particles will enable experimenters to calibrate their immensely complex devices, says Tejinder “Jim” Virdee, a physicist at Imperial College London and spokesperson for the 2900-member team working with the 12,500-ton CMS particle detector, ATLAS's rival. (The LHC's two other detectors, ALICE and LHCb, won't search directly for new particles but will do more specialized work.) For example, a Z boson can decay into a muon and an antimuon, so by studying Z's physicists can measure how well they spot those particles.
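The calibration idea can be made concrete with a toy invariant-mass computation (a hypothetical sketch, not the collaborations' actual reconstruction software): muon pairs from Z decays should cluster at the Z mass, about 91.2 GeV, so the position and width of that reconstructed peak tell experimenters how well they are measuring muons.

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from (E, px, py, pz)
    four-vectors, in GeV (natural units, c = 1)."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Idealized back-to-back muons, each carrying half the Z's rest energy;
# the muon mass (~0.106 GeV) is negligible at these energies.
mu_plus  = (45.6, 0.0, 0.0,  45.6)
mu_minus = (45.6, 0.0, 0.0, -45.6)

print(invariant_mass(mu_plus, mu_minus))  # ~91.2, the Z mass in GeV
```

In practice the detectors record huge numbers of such pairs, and any shift of the reconstructed peak away from the known Z mass reveals a miscalibration.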

    Such studies also set the baselines from which to search for something new, Virdee says. “If you see something [unusual], the first question everyone is going to ask is, ‘Do you also see the other things you expect?'” he says. “You have to be able to say ‘yes’ before you can claim anything new.” The LHC should produce a smidgen of data between October and December, when it will shut down for the winter, and experimenters will use it primarily to “rediscover” the standard model.

    The Higgs: Wait a couple of years

    Of course, experimenters will also keep an eye out for new things, such as the elusive Higgs boson. That oddball particle solves a serious problem with the standard model: The theory goes mathematically haywire unless particles have no mass. The “Higgs mechanism” sidesteps that problem by generating mass through the interactions of the otherwise massless particles themselves. It assumes that empty space is filled with a field a bit like an electric field that drags on particles to give them inertia, the essence of mass. Just as an electric field is made up of photons, the Higgs field consists of particles—Higgs bosons—that can be ripped out of the vacuum.

    But finding the Higgs may not be easy. It all depends on how much the particle weighs, says Karl Jakobs, a physicist at the University of Freiburg in Germany and physics coordinator for ATLAS. The standard model does not predict how heavy the Higgs should be. If it weighs between about 200 and 500 times as much as a proton, then it should stick out fairly clearly. In that case, experimenters might collect enough data to find it by the end of 2009, Jakobs says, although analyzing the data could take months longer. But previous searches and indirect inferences suggest that the Higgs is lighter—definitely more than 121 times as massive as a proton but probably less than 170 times as massive as that benchmark. If the Higgs is that light, then it could take until 2012 or later to find it.

    The difference is that if the Higgs is heavy enough, it should decay in a distinctive way—into two hefty Z's that both decay into a muon and an antimuon. But if the Higgs is too light for that, then researchers will have to look for it decaying into combinations such as a pair of photons. So many photons will be produced in a typical LHC collision that sifting out the Higgs's signal from the clutter will take lots of data.

    Most physicists say that they are sure to find the Higgs or something even weirder, because without it the standard model again breaks down mathematically at the energies the LHC will reach. Ironically, finding only the Higgs boson would disappoint many, as it would leave physicists nothing to puzzle over. “The worst scenario for me is that you start running and you see no evidence of deviations from the standard model, and after 2 or 3 years you see evidence of a standard-model Higgs and nothing else,” Jakobs says. The Higgs would be the last brick in the standard model. It alone would leave physicists facing a conceptual wall and could signal the end of the field.

    Spotting signs and nailing discoveries

    Most physicists expect to find much more at the energies the LHC will explore. New forces might emerge, or quarks themselves could turn out to consist of other particles. More speculatively, space may have additional curled-up dimensions that might be pried open, or the LHC might make tiny black holes, which would tie together the realms of quantum mechanics and gravitational physics.

    Perhaps the most favored idea is supersymmetry, a scheme that posits a heavier, unobserved “superpartner” for every particle in the standard model. Seemingly profligate in its complexity, supersymmetry would help solve a number of fundamental, albeit esoteric, problems in the standard model. For example, it helps unify the three forces in the theory, a prerequisite to formulating a theory in which all forces, including gravity, are different manifestations of a single master force. Supersymmetry might also provide the mysterious dark matter whose gravity holds the galaxies together, as the least massive superpartner would be a heavy particle that would interact with ordinary matter essentially only through its gravity.

    Supersymmetry might be very easy to see at the LHC, some say. “We predict a signature that they could see with five events,” says Michigan's Kane. “They could see it in the first week of running in October.” Generally, collisions producing the undetectable least massive supersymmetric particle would look lopsided, with a spray of ordinary particles shooting out one side of the particle detector and the supersymmetric particle zipping out the other side without leaving a trace.

    But although spotting those events may be easy, proving that they're evidence of supersymmetry and not something else may be tough, says CERN's Paraskevas Sphicas, physics coordinator for CMS: “The catch is that the signature is so complex that we would have to do a lot of analysis to understand it.” In fact, clinching the case for supersymmetry could take several years.

    First, however, physicists must get the machine up and running. Researchers have already succeeded in injecting protons into each of its countercirculating rings. On 10 September, they'll try to coax the beams all the way around the rings at a very low energy. They'll then aim to increase the beam energy to 70% of the ultimate goal and the beam intensity to 1/1000 the design standard before beginning collisions in several weeks' time. Next year, the LHC should smash a billion particles each second at full energy.

    The first collisions will mark the beginning of the real fun for experimenters. Some say that, although they have some pretty good ideas, they don't really know what to expect. “I want surprises,” says CERN's Maria Spiropulu, an experimenter working on CMS. She may well get them, although she and her colleagues may have to wait just a bit longer.


    Researchers, Place Your Bets!

    1. Adrian Cho

    The days before the start-up of the Large Hadron Collider (LHC) should be filled with quiet contemplation and reverence for the adventure to come, says physicist Maria Spiropulu. “Now is not the time to speculate,” says the experimenter at the European particle physics lab, CERN, near Geneva, Switzerland. “We should be silent and respectful and wait for the data to come.”

    Or not. Many physicists seem to think that now is precisely the time to guess at what CERN's great particle smasher might find. And some are even willing to put their money where their favorite theoretical models are and wager on their expectations.

    Tommaso Dorigo, an experimenter at the University of Padua in Italy, doubts that the LHC will find evidence of supersymmetry, a theoretical scheme that predicts a massive “superpartner” for every known particle in physicists' current “standard model.” In the past 10 or 15 years, extremely precise measurements of standard-model particles have indirectly undermined the viability of the notion, Dorigo says. “I realized I don't believe in the thing,” he says. Dorigo has bet $1000 with two other physicists that, after the LHC has accumulated a certain amount of data, it will see no sign of supersymmetry.

    High rollers.

    Tommaso Dorigo (below) wagers that the LHC will see nothing new. Jacques Distler disagrees and expects to pocket $750 of Dorigo's money.


    More precisely, Dorigo has bet that the LHC will see no clear deviations from the standard model of any kind, explains Jacques Distler, a theorist at the University of Texas, Austin, who has $750 of the action. Like a calculating professional gambler, Distler says he took the bet because it is so open-ended that he likely can't lose. “History has always been, you explore a new energy range and you see something new,” he says.

    For some, not having a bet bespeaks the strength of their predictions. Gordon Kane, a theorist at the University of Michigan, Ann Arbor, says he would gladly wager that the LHC will find supersymmetry, but “nobody I know will bet against it.” Stuart Raby, a theorist at Ohio State University in Columbus, also says he can't find anyone who will take such a bet. To which Distler says, “I wonder how hard they tried.”

    The general public can get into the game, too. Online gambling sites and prediction exchanges are taking bets on when the Higgs boson will be discovered, whether the Tevatron collider at Fermi National Accelerator Laboratory in the United States will see it first, and related questions.


    Bracing for a Maelstrom of Data, CERN Puts Its Faith in the Grid

    1. Daniel Clery

    Researchers have hammered out new networking tools to store the LHC's instrument readings and make them available to physicists worldwide

    After the Large Hadron Collider (LHC) powers up next week, the physicists and engineers who built the machine and its detectors won't be the only ones nervously waiting for its two beams to collide for the first time. Just as anxious will be the researchers charged with taking the flood of data that the LHC will produce and processing it, storing it, and making it available for physicists to study the world over. The LHC is expected to produce 15 petabytes (15 million gigabytes) of data every year. Dedicated fiber-optic lines have been laid down to whisk the data away from CERN to some 250 other physics labs in 50 countries worldwide, where about 100,000 PC processors are ready and waiting to receive them.

    First stop.

    CERN's computers form the central node in a global data-sharing network.


    At the beginning of this decade, CERN's information technology (IT) department decided to handle the LHC's torrent of data using a novel computer architecture known as a grid. A grid is a way of using the Internet, just as the World Wide Web and e-mail are. But the technology has not developed as fast as particle physicists had expected. CERN researchers believe they have ironed the wrinkles out of their system, dubbed the LHC Computing Grid (LCG), but nagging doubts remain.

    “By an order of magnitude, this is the biggest grid [yet assembled],” says John Gordon, deputy director of GridPP, the United Kingdom's contribution to the LCG. “I'm reasonably confident that the grid is ready for data.” But Les Robertson, head of the LCG project from its inception in 2001 until the beginning of this year, adds a note of caution: “It's very difficult. There's no real data, and real users are not active. A live test will only come when [real] data starts to flow.” He adds: “This is what we will use. There's no fallback.”

    Fifteen petabytes is an enormous amount of data. To store it all on CDs would require a stack of disks 20 kilometers high—more than four times the height of Mont Blanc, Western Europe's tallest mountain. When CERN's IT experts began planning how to handle data from the LHC in the late 1990s, they soon realized that it would not be feasible to do it all at CERN. It wasn't clear that Geneva's electricity supply could power enough computers to do the job, and in any event, CERN couldn't afford them: All of the LHC budget was being spent on the machine itself. “It was easier to get resources that were already available at computer centers,” says Robertson.
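The stack-of-CDs figure is easy to check (a sketch; the 900 MB-per-disc capacity is an assumption chosen to reproduce the article's 20-kilometer figure, as standard 700 MB discs would give a stack closer to 26 kilometers):

```python
DATA_PER_YEAR = 15e15    # 15 petabytes, in bytes
CD_CAPACITY   = 900e6    # assumed bytes per disc (an assumption; 700 MB is more typical)
CD_THICKNESS  = 1.2e-3   # metres per disc

discs = DATA_PER_YEAR / CD_CAPACITY
stack_km = discs * CD_THICKNESS / 1000
print(f"{discs / 1e6:.1f} million CDs, stack about {stack_km:.0f} km high")
```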

    Trickle down.

    Beginning as a way for hundreds of physics labs to divide the work of processing and archiving LHC data, the global “grid” evolved into a universal tool kit for particle physicists to share and study results.


    At first, the CERN team set about designing an architecture in which, as the LHC detectors churn out data, the information would be archived in its raw state at CERN while simultaneously being streamed out to 10 or so large physics labs elsewhere in the world. At these tier-1 centers, some processing of the data would be done; then it would be archived again and some data would be farmed out from each tier-1 center to 10 or 20 tier-2 centers. In this way, the work of processing and archiving data is shared among particle physics labs around the world. The scheme would have worked, but it lacked flexibility, and the researchers soon heard about something better.

    In the mid-1990s, Ian Foster and Steven Tuecke of Argonne National Laboratory in Illinois and Carl Kesselman of the California Institute of Technology in Pasadena had devised the idea of a grid. Whereas the Web is essentially a system for moving data around with limited processing for tasks such as searching, a grid aims to share everything: processing power, storage, scientific instruments, simulation, and so on.

    It is called a grid in a deliberate analogy to the electricity grid. When you plug a toaster into a socket, you don't know how the electricity was produced or how it traveled to you. Similarly, with a computing grid, a researcher can use a standard PC interface to request a job to be done. Then a program known as middleware takes over, marshalling resources from multiple sites across the Internet. Raw instrument readings may be taken from a database in Europe and processed by a supercomputer in the United States; the manipulated data may be stored in China and then put through a visualization program in Japan before being returned to the researcher—who sees only the results, not the journey that got them there.
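The toaster analogy can be made concrete with a deliberately simplified sketch: the user submits a job and gets back a result without knowing which site ran it. The site names and capacities below are invented for illustration; real grid middleware, with its resource brokers and replica catalogs, is vastly more complex:

```python
# Toy sketch of the "middleware" idea: a job is submitted to the grid,
# middleware picks a site with spare capacity, and the user sees only
# the result, not where it ran. Site names/capacities are invented.
sites = {"tier1-europe": 2, "tier1-us": 0, "tier2-asia": 5}  # free CPU slots

def submit(job):
    """Dispatch a job to any site with spare capacity and return its result."""
    for name, free_slots in sites.items():
        if free_slots > 0:
            sites[name] -= 1   # claim a slot at that site
            return job()       # the user gets the answer back...
    raise RuntimeError("no free resources on the grid")

result = submit(lambda: sum(range(1000)))  # ...without knowing where it ran
print(result)  # 499500
```

The point of the sketch is the indirection: `submit` plays the role of the socket in the toaster analogy, hiding the machinery behind it.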

    A major difficulty in setting up a grid arises from the “firewalls” that institutions erect to protect their computers from unauthorized access. It's a challenge getting the differing architectures and security arrangements of all the institutions in a grid to work together and trust one another. Each job travels around with “certificates” confirming that the user who requested the job has the authority to use the resources. To make grids work, there are “many hurdles, social, political, and technical,” says particle physicist David Britton of the University of Glasgow in the U.K.
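The certificate mechanism can be illustrated with a bare-bones sketch: each job carries a credential, and every site checks it before doing any work. The user names are invented, and real grid security relies on X.509 proxy certificates and cryptographic signatures, not plain strings:

```python
# Toy illustration of certificate-based authorization on a grid.
# Invented names; real grids use signed X.509 proxy certificates.
TRUSTED_CERTIFICATES = {"alice@cern.ch", "bob@glasgow.ac.uk"}

def run_on_site(job, certificate):
    """Run a job only if the accompanying certificate is trusted here."""
    if certificate not in TRUSTED_CERTIFICATES:
        raise PermissionError("certificate not recognized at this site")
    return job()

ok = run_on_site(lambda: "histograms ready", "alice@cern.ch")
print(ok)  # histograms ready
```

Because every institution maintains its own notion of whom to trust, getting dozens of such checks to agree across sites is exactly the social and political hurdle Britton describes.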

    As particle physicists learned more about grids, the CERN team decided that the approach offered a more flexible way to handle the LHC's mountain of data, says Robertson. Although the same basic layout of tier-1 and tier-2 centers remains, it is no longer a rigid structure like the spokes of a wheel, with users tied to their local tier-1 center. The production process of disseminating the LHC data is handled by the grid, and researchers can get hold of the data they want without knowing where they are or what passwords they need to get access to them.

    Since that decision was made in 2001, dedicated high-speed fiber-optic lines have been built between CERN (tier-0) and the tier-1 centers. Beyond that, the normal Internet provides the infrastructure. Particle physicists in each participating country have built up the LCG with funding from their respective governments for computer resources to add to the grid. Within the European Union, national grid efforts for research have been linked to form the Enabling Grids for E-Science (EGEE), which forms the backbone of the LCG in Europe. That role is performed in the United States by the National Science Foundation's Open Science Grid. Other smaller grids, such as GridPP, Scandinavia's NorduGrid, and Italy's INFN Grid, have also been woven into the LCG.

    In February and May this year, researchers carried out two major trials of the system, sending simulated data from the LHC detectors themselves through CERN's tier-0 hub out to tier-1s and tier-2s. Britton says the February test was “quite successful, … better than we hoped,” although they managed only a couple of days running data from all four detectors simultaneously. Much fine-tuning was done before the May dry run, and as a result they ran the four detectors together for the entire month. “We tested the whole chain, and most things stood up,” says Gordon. Some bits of software didn't behave as expected, he says. In addition, the tier-1 center at Amsterdam had trouble keeping its computers cool, while the U.K. tier-1 at Rutherford Appleton Laboratory near Oxford suffered a small power failure. “But it was successful because we caught up,” Gordon says.

    Researchers say that almost all the computing resources needed for full LHC operation are now in place, and they are confident that the production side of the operation—transmitting, processing, and storing LHC data as it's produced—will go as planned. The thing that still gives them the odd sleepless night is what will happen when the LHC starts producing some interesting physics. Suddenly, thousands of physicists across the globe who have patiently waited years for these data will log onto the grid and request jobs. Grid experts refer to such use as “chaotic” because of its unpredictability. “It's definitely an unknown still,” says Gordon. Britton agrees. “It will be a challenge to the grid because there will be a large number of less expert users,” he says. “We'll have to learn how to help users in this type of environment.”

    LCG researchers were surprised that it has been this hard to develop the grid. At the outset, they expected it to evolve as the World Wide Web did: After CERN invented it, industry took the ball and now provides the Web as a service to researchers and the public alike. Although some companies, including Amazon, are starting to provide gridlike services commercially—the buzz phrase is “cloud computing”—the LCG researchers had to develop much of the new system as they went along. “We hoped the grid would be a service by now, but it hasn't happened,” says Tony Cass, head of fabric infrastructure and operations in CERN's IT department.

    Britton acknowledges that it was a risk going down the grid route, but he says the particle physics community looked at the technology, assumed it would develop, and assumed they could make it work in the time available—just as they did with the rest of the LHC. “That's exactly what particle physicists have to do: push things beyond the current envelope.”


    Is the LHC a Doomsday Machine?

    1. Daniel Clery,
    2. Adrian Cho

    Even for a car ad, the pitch on the radio was hard to ignore: an “end-of-the-world sale,” offering a 30% discount and $1000 cash back on new automobiles. “Buy yourself something really frivolous,” it urged. The reason: Miniature black holes created by the Large Hadron Collider (LHC) might soon touch off an unstoppable chain reaction that would blow up Earth.

    Brad Benson, the New Jersey Hyundai dealer behind the ad, isn't really worried about the fate of the planet. “I'm a National Geographic kind of a guy,” he told Science. “I love reading about this kind of stuff.” But as the $5.5 billion particle smasher prepares to circulate its first beam next week, some people see the machine as a threat. A handful of physicists and others have proposed an array of dangerous entities that could be created in the minuscule fireball of a particle collision—including microscopic black holes, strange matter that is more stable than normal matter, magnetic monopoles, a different quantum-mechanical vacuum, and even thermonuclear fusion triggered by a stray beam. Discussion forums on the World Wide Web sizzle with rants against arrogant scientists who meddle with nature and put us all at risk. And a few groups have sued to stop the LHC.

    In March, Walter Wagner, a nuclear physicist based in Hawaii, and Luis Sancho filed for a restraining order and injunction against the LHC in the U.S. District Court of Hawaii. This week, Wagner is due to appear to fight a motion from the U.S. Department of Energy to dismiss the case.

    Meanwhile, late in August a European group filed a complaint with the European Court of Human Rights (ECHR) for an emergency injunction to halt the switch-on. On 29 August, after 3 days of deliberation, the court declined to grant the injunction. An ECHR spokesperson says the plaintiffs can continue to pursue the complaint, but given the number of cases on the court's docket, it may take as long as 3 years to decide on its admissibility alone. “The only serious solution is not even to start the [LHC] project,” says Markus Goritschnig, spokesperson for the ECHR complaint. “We will continue the case,” he adds.


    The LHC is not the first particle collider to face campaigns over its safety. In 1999, Wagner sued to stop the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory in Upton, New York. The case was dismissed in 2000, and RHIC began operating the same year. To forestall similar campaigns against the LHC, which part of the time will collide heavy ions at even higher energies, CERN commissioned five independent physicists and one CERN staffer to assess the dangers of the new machine. Their conclusion, published in 2003: “We find no basis for any conceivable risk.” A second panel, the LHC Safety Assessment Group (LSAG), came to the same conclusion in a report published this June.

    The doom mongers do have one thing right: The LHC just might create black holes. According to Einstein's theory of general relativity, energy warps space and time. So by smashing protons together with unprecedented vigor, the LHC might cram enough energy into a small enough volume to create pinholes in the universe—miniature black holes. If space has only three dimensions, even the energies reached by the LHC will be about a million billion times too low to make one. However, string theory—which assumes that every fundamental particle is in fact an infinitesimal vibrating string—predicts that space has more dimensions curled into tiny loops. If some of them are curled loosely enough, then the energy threshold may tumble to within the LHC's reach, some theorists have argued.

    But such tiny black holes should quickly evaporate into ordinary particles. At the least, they must be able to decay back into the particles that created them. They should also decay through “Hawking radiation,” which comes about when, thanks to quantum uncertainty, a particle-antiparticle pair pops out of the vacuum and one partner falls in the hole while the other shoots outward.

    LHC opponents point out that no one has ever observed Hawking radiation, and they fear that the black holes will grow and gobble up more and more matter. German physicist Rainer Plaga, in a paper cited in the ECHR complaint, theorizes that black holes could both grow and radiate intensely, doing as much damage through radiation as they do by eating everything in sight. In another cited paper, Otto Rössler, a theoretical chemist at the University of Tübingen in Germany, begins with an unusual—and, physicists say, wrong—interpretation of general relativity to argue that minuscule black holes should be stable and may form tiny radiation-spewing quasars.

    All those scenarios are based on dodgy reasoning, says Jonathan Ellis, a theorist at CERN. Besides, he says, Earth, the sun, and other celestial bodies are constantly bombarded by cosmic particles with energies far higher than the LHC will reach. As the LSAG noted in its report: “This means that Nature has conducted the equivalent of about a hundred thousand LHC experimental programmes on the Earth already—and the planet still exists.”

    Physicists may have unwittingly helped foment panic by talking too glibly about black holes, Ellis notes. “Maybe we should be more careful with our rhetoric,” he says. “For example, we talk about recreating the big bang, and people think, ‘Oh my God, they're going to recreate the big bang!’” Of course, physicists don't aim to literally return the universe to its fiery birth, just to mimic those conditions in fleeting particle collisions. Alas, that less sexy line isn't going to catch anyone's attention, as any good car salesman can tell you.