News this Week

Science  25 Jan 2008:
Vol. 319, Issue 5862, pp. 394



    Dust Storm Rising Over Threat to Famed Rock Art in Utah

    Keith Kloor*
    *Keith Kloor is a senior editor at Audubon Magazine.

    For more than 1000 years, geometrical human figures, animals, and abstract designs have graced the sandstone walls of Nine Mile Canyon in central Utah. Considered one of the premier rock art sites in North America, the canyon holds at least 10,000 images pecked and painted by the mysterious Fremont and later the Ute Indians.

    Now a much-anticipated study just submitted to the U.S. Bureau of Land Management (BLM) warns that truck traffic from nearby oil and gas operations could be fading the splendor of the world-renowned rock art. “The results of my study are very alarming,” says report author Constance Silver, an art conservator with Preservar Inc. in Brattleboro, Vermont.

    The report, due to be released in a week or two as part of an Environmental Impact Statement (EIS) on expanding oil and gas operations in the canyon, is likely to kick up a furious dust storm of its own. BLM managers say they are not convinced that the current operations are causing serious damage. “Obviously, the dust is having an impact on the visual clarity of the rock art. But whether the dust is having a [lasting] impact is open to question,” says archaeologist Julie Howard of BLM in Salt Lake City.

    Big 18-wheel rigs have been rumbling through Nine Mile Canyon since 2004, when BLM gave energy companies the go-ahead to drill for natural gas higher up in the plateau. The decision outraged some archaeologists because the art sits just adjacent to the canyon's main, coarsely graded road.

    Silver's report is the first to study the effects of the traffic and the dust it creates. One of the few conservators who specializes in rock art, she was commissioned by BLM officials in Utah last year to assess the impacts. She worked in the canyon last April, recording the amount of particles in the air before and after trucks passed by. She also collected particulate samples in heavily trafficked areas and in sparsely visited side canyons (for control). She completed her report late last year and described her results to Science earlier this month.


    Once-vivid rock art panels like this one of a two-headed snake in Nine Mile Canyon (top) are shrouded by dust in 2006.


    Ironically, Silver found that the chief danger comes from an effort by the Bill Barrett Corp. and other energy companies that use the road to suppress dust: They have repeatedly applied magnesium chloride to the dirt road. This salt damps dust by pulling moisture out of the air. But Silver says the chemical is “flying all over the place” along the edges of the road and settling on the pictographs: “You can see the deposition taking place” on the art.

    Magnesium chloride is “vicious stuff,” says Silver. “It peels concrete.” Over time, she says, the salt will corrode the rock and damage the paintings on its surface.

    But BLM managers familiar with Silver's study were hesitant about its conclusions. “Nine Mile is very controversial,” says Roger Bankert, BLM field manager in the Price, Utah, office, who helped draft the soon-to-be-released EIS. “There could be extremist views on both sides. Some say there's a lot of damage, and some say there's no damage.” Bankert suggested that additional analyses might be in the works. “We could have other specialists disagree with [Silver's report],” he said.

    The use of magnesium chloride in Nine Mile Canyon as a dust suppressant has been “a concern for a long time” among some BLM staffers, says Dennis Willis, a BLM recreational planner in the Price office; some are also concerned that the salt is contaminating the canyon's stream. Although Silver's is the first study to suggest a magnesium chloride problem in Nine Mile, existing data suggest that the compound, also used as a road deicer, is a corrosive agent. Bankert points out that Carbon County supervises the use of the road, and county officials, not BLM, approved the magnesium chloride use.

    Scientists familiar with the level of truck traffic on the canyon road say they are not surprised by Silver's findings. “The fact that the dust is being kicked up on the rock art panels is apparent to anyone who goes down there,” says Kevin Jones, Utah's state archaeologist.

    Some experts say it is inevitable that the dust buildup will cause damage. “Think of a painting in your house that is placed over a fireplace that produces soot,” says chemist Marvin Rowe of Texas A&M University in College Station, who works on dating rock art. “Over time, that soot gets incorporated into the mineral content of the painting, and it builds a thick enough coating where it makes the painting fade away.”

    One option might be to wash the art, although some experts fear damage from washing, too. Silver predicts some action will be taken: “They're really going to have to do something about the road and clean up those sites.”


    A Plan to Capture Human Diversity in 1000 Genomes

    Jocelyn Kaiser*
    *With reporting by Hao Xin.

    It's a sign of how fast horizons are changing in biology: Researchers who only a few years ago were being asked to justify the cost of sequencing a single human genome are now breezily offering to sequence 1000. And they say they can do it in a flash. Over the next 3 years, an international team plans to create a massive new genome catalog that will serve as “a gold-standard reference set for analysis of human variation,” says Richard Durbin of the Sanger Institute in Hinxton, U.K., who proposed the project just last year.

    The 1000 Genomes Project, as it's called, will delve much deeper than the sequencing of celebrity genomes, three of which were completed last year. It will help fill out the list of new genetic markers for common diseases that came out in 2007, says Francis Collins, director of the U.S. National Human Genome Research Institute (NHGRI) in Bethesda, Maryland. At the same time, new technologies will be put to the test, and researchers will work out how to handle a growing deluge of data. Such practical advances will be needed a few years from now when sequencing entire genomes will be routine, notes population geneticist Kenneth Weiss of Pennsylvania State University in State College, who is not part of the project. “This seems overall like a next logical step,” he says.

    The search for disease genes took off last year, building on the first human genome reference sequence in 2003 and the subsequent HapMap. The latter describes how blocks of DNA tagged by common variants, called single-nucleotide polymorphisms (SNPs), vary in different populations. These SNPs have turned up more than 100 new DNA markers associated with common illnesses such as diabetes and heart disease (Science, 21 December 2007, p. 1842). But the HapMap includes only the most common markers, those present in at least 5% of the population.

    More is better.

    Researchers aim to acquire DNA data from 1000 individuals.


    To find rarer SNPs that occur at 1% frequency, genome leaders say, they need to sequence about 1000 genomes. According to a plan hammered out by about three dozen experts last year, the project will take advantage of new technologies that have slashed the cost of sequencing. The work will be done by the three U.S. sequencing centers funded by NHGRI, the Sanger Institute, and the Beijing Genomics Institute (BGI) in Shenzhen, China.

    Because the technologies are so new, the consortium will start with three pilot projects. One will exhaustively sequence the entire genome of six individuals: two adults and both sets of their parents. DNA in these six genomes will be analyzed repeatedly up to 20 times to ensure almost complete coverage. A second project will sequence 180 individual genomes at light (2×) coverage, leaving gaps. The third project will be to fully sequence (20× coverage) the protein-coding regions of 1000 genes (5% of the total) in about 1000 genomes. The samples, all anonymous and with no clinical information, will mainly be drawn from those collected for the HapMap, which includes people of European, Asian, and African descent.
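    The relative scale of the three pilots can be seen with some back-of-the-envelope coverage arithmetic (not from the project plan; it assumes a roughly 3-gigabase human genome and, purely for illustration, about 2 kilobases of coding sequence per gene):

```python
# Rough sequencing load of the three 1000 Genomes pilot projects,
# in gigabases of raw sequence. Figures are illustrative only.
GENOME_GB = 3.0  # approximate haploid human genome size, gigabases

# Pilot 1: six genomes sequenced deeply, up to 20x coverage each
pilot1_gb = 6 * 20 * GENOME_GB

# Pilot 2: 180 genomes at light 2x coverage
pilot2_gb = 180 * 2 * GENOME_GB

# Pilot 3: coding regions of 1000 genes in ~1000 genomes at 20x,
# assuming ~2 kb of coding sequence per gene (an assumed figure)
coding_gb_per_genome = 1000 * 2e3 / 1e9
pilot3_gb = 1000 * 20 * coding_gb_per_genome

print(round(pilot1_gb), round(pilot2_gb), round(pilot3_gb))
```

    Even at light coverage, the 180-genome pilot dominates the raw data volume, which hints at why the consortium treats data handling as a test in its own right.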

    The pilots should take about a year and will put the new technologies to a “very vigorous test,” Collins says. After that, the consortium will decide what coverage to use to sequence the entire set of 1000 genomes. Most of the project's $30 million to $50 million price tag will be paid from the existing sequencing budgets of institutes, organizers say.

    The new catalog could help disease gene hunters in several ways. It may allow researchers simply to hunt through an index for a SNP in a particular location that alters a gene product rather than run a time-consuming sequencing project, Collins says. The project will also catalog genes that are sometimes lost or duplicated; such copy-number variants can cause disease. By compiling rarer variants, it should also help resolve a debate about the relative contribution of these mutations to disease risks. “There's no question it's going to be a tremendous resource,” says Yale University's Judy Cho, who has used the HapMap to find a new gene for Crohn's disease.

    China is also launching its own human genomes project. BGI Shenzhen this month announced that it is seeking 99 volunteers who will help pay to have their genomes sequenced as part of a study of diversity (Science, 26 October 2007, p. 553). The 3-year effort, called the Yanhuang Project after the Yan and Huang tribes that are believed to be ancestors of modern Chinese, will overlap with the 1000 Genomes Project. With proper consent, some volunteers' genomes will be sequenced for both efforts, says Wang Jun, director of BGI Hangzhou.

    In a parallel effort, J. Craig Venter of the J. Craig Venter Institute in Rockville, Maryland, says his team will sequence up to 10 individuals this year and publish the data along with medical information. Venter—who dismisses the 1000 Genomes Project as “more survey work” because not all genomes will be sequenced to great depth—has even bolder plans. He says he aims for “complete diploid genome sequencing” of 10,000 human genomes in the next decade. Still, he says, “it's great that there's such an expansion of things.”


    Max Planck Accused of Hobbling Universities

    Gretchen Vogel

    International university rankings tend to give German schools an inferiority complex. In the latest worldwide assessment from Shanghai Jiao Tong University in China, no German university made it into the top 50. The government hopes to change that, pouring €1.9 billion ($2.8 billion) into an Excellence Initiative that is supposed to boost a few schools to world-class status (Science, 20 October 2006, p. 400). But this month, a group of respected researchers charged in a newspaper article that the problem isn't money, it's the country's Max Planck Society, which plucks many of the country's leading researchers out of universities into its own institutes. The missive has sparked a discussion in the press and the research community about how the country can best burnish its international reputation.

    In an article headlined “The Unsolved Max Planck Problem,” nine scientists, including Nobel laureate Günter Blobel of Rockefeller University in New York City, said that the Excellence Initiative is hopeless as long as the country's Max Planck Society skims off all the top talent. The scientists, writing in the 8 January Frankfurter Allgemeine Zeitung, said that Germany's two-tiered research system not only lures the best brains away from universities but also leads to a mismatch between the most promising graduate students and the best labs. They say the society's institutes should be merged into nearby universities, and its researchers should become professors with all the attendant privileges and responsibilities.

    With its annual budget of €1.4 billion ($2.1 billion), the Max Planck Society funds 80 institutes, each focusing on a specific area in the natural or social sciences. Some institutes have just two directors—the position equivalent to a full professor—whereas the biggest has 10. Most are located in cities that have universities, and many do cooperate closely with local colleagues. Graduate students working in Max Planck labs, for example, must be enrolled in a partner university, from which they receive their degree. But too often, says Widmar Tanner, a biologist at the University of Regensburg who initiated the article, Max Planck Institutes and universities are in direct competition. “The elite university programs cannot work as long as you have this competitive structure called the Max Planck,” he says.

    Brain drain?

    Some researchers say Germany's Max Planck Society leaves universities without top talent.


    The training of Germany's young talent also suffers, say Tanner and his co-authors. Although some Max Planck researchers do teach, they have less contact with students than a university professor would. That leads to a disconnect between the most promising young researchers and top mentors, the authors charge. The problem is exacerbated, they say, because Germany has no standardized test like the American Graduate Record Examination, so graduate students are largely recruited through personal contacts. Without such contacts, Max Planck researchers are at a disadvantage when recruiting. Max Planck labs “get very good postdocs, but the young, fresh graduate students? At best average,” Tanner says.

    The solution, the authors say, would be to integrate Max Planck institutes into their local universities, adopting a system more like that of the Howard Hughes Medical Institute in Chevy Chase, Maryland, where selected researchers receive extra funding but remain employed by their host university—and lend it their renown. Such ideas have been proposed before, the authors acknowledge. They even quote Nobel laureate Max Delbrück criticizing the Max Planck's forerunner, the Kaiser Wilhelm Society, “because it takes the best people out of teaching and impoverishes contact with students.” Even so, Tanner says, criticism of the Max Planck today “is taboo” in German political circles. “We wanted to start a discussion,” he says.

    The authors “may have good intentions, but [their proposal] is not the solution,” says Max Planck President Peter Gruss. The numbers just don't add up, he says. The society's 260 directors “will not suffice to change the university system,” and the society's entire budget is less than that of Stanford University—often cited in Germany as the kind of elite school the country lacks.

    “Those who desire integration do not comprehend the concept of a Max Planck institute,” Gruss says. The society's purpose is to be flexible enough to fund cutting-edge research across all fields, and its institutes are ultimately temporary. “When I was a student studying virology, Max Planck closed a virology institute in Tübingen and opened one in developmental biology. Thank God we did, because we got Christiane Nüsslein-Volhard to work there.” In 1995, Nüsslein-Volhard won the Nobel Prize in physiology or medicine. “We try to get the best person in a given field. If we don't get the best person, we change direction and find a new field.” A university does not have the same freedom to drop old fields and pick up new ones, he says. Merging Max Planck into universities would “remove a successful system that by any measure is in the top tier of institutes worldwide.”

    Gruss and others point out that the Excellence Initiative has encouraged new cooperation between universities and Max Planck. Most of the schools that won funding in the nationwide competition did so by developing so-called Centers of Excellence or Graduate Colleges that bring together researchers from the university and a neighboring Max Planck institute. The programs “have broken down the alleged divisions and led to many close collaborations that play on the strengths of both partners,” says Matthias Kleiner, president of Germany's main funder of research grants, the DFG.

    German universities that want to make themselves world-class can learn from another of Max Planck's key strengths, Gruss says: “What you need to do is to give some people more and take it from others. The universities over the last decades were not prepared to do that.”


    France Launches Public Health School à l'Anglo-Saxonne

    Martin Enserink

    RENNES, FRANCE—If imitation is the sincerest form of flattery, schools of public health in Britain and the United States should feel pleased. France has just created a new institute, the first of its kind in France, that takes its inspiration from the Harvard School of Public Health, the London School of Hygiene and Tropical Medicine, and other famous Anglo-Saxon institutes. Its goal: to give France, currently a bit of a laggard in public health research and education, an institute that can compete with the world's best.

    The new French School of Advanced Studies in Public Health (EHESP) holds some trump cards: strong political backing, a new master's degree in English to lure anglophone students and teachers, and a dream location in the heart of Paris, next to the Notre Dame cathedral. But some experts say making it a success remains an uphill climb.

    Many countries in continental Europe don't have a tradition of public health schools separate from the faculty of medicine. In contrast, there are 40 in the United States. Public health expertise is particularly scattered in France, says Jacques Bury, a former director of the Association of Schools of Public Health in the European Region who now works for a private consulting company in Geneva, Switzerland.

    To address that situation, France passed a public health law in 2004 that ordered EHESP into existence. It doesn't start from scratch, however. Officially opened this month, the school is an evolution of the National School of Public Health, an “école d'administration” with a €57 million annual budget, in Rennes, 400 kilometers west of Paris. For the past 45 years, it has trained managers and inspectors for France's state-run health care system. The new school—the law gives it the status of a university—will continue that mission but add master's and Ph.D. programs and dramatically expand its research in areas such as epidemiology, information sciences, and health care management, says EHESP dean Antoine Flahault. The existing school, whose research focuses on environmental health and social sciences, will morph into two of EHESP's five departments.

    Grand ambitions.

    Dean Antoine Flahault wants the new public health school to compete with the world's best.


    New construction is planned for the Rennes campus to accommodate those plans. In addition, Flahault has convinced the city of Paris to give EHESP an entire floor in the Hôtel-Dieu, a legendary hospital that occupies “one of the 10 best addresses in the world,” says Flahault. Putting the master's program there should help lure top talent, he says.

    Flahault, 47, an expert in infectious disease modeling at Pierre and Marie Curie University in Paris, is an unabashed admirer of the Anglo-Saxon schools. He plans to apply for accreditation from the U.S. Council on Education for Public Health in Washington, D.C., which so far has accredited only one non-U.S. school. That would be a way to assure that EHESP is doing its job well, and it might help persuade U.S. students and staff to come to France, he says.

    Making the school compete in research at the international level will be a challenge, however, says Yves Charpak, a former policy officer at the World Health Organization who now heads international affairs at the Pasteur Institute in Paris. EHESP does not have a big purse to recruit outsiders; the government, which strongly supports the school, has promised Flahault 12 new professorships, but to become a true science powerhouse, EHESP will need to draw in research teams from universities and institutes such as the biomedical research agency INSERM, which have their own agendas.

    But Flahault is optimistic that the new school will become a magnet. And he hopes to tap other sources of money as well, such as endowed professorships—yet another Anglo-Saxon idea that he plans to copy.


    Got Data Questions? NSF's Indicators Has (Most of) the Answers

    Jeffrey Mervis

    Few would argue with Steven Beering, chair of the oversight board for the U.S. National Science Foundation, when he asserts that NSF's biennial Science and Engineering Indicators represents “the most authoritative source of information on international trends in science and technology.” The data-packed, two-volume 2008 report issued last week would be just the place to go if the makers of Trivial Pursuit come out with a special science policy edition. But it's also a good source for politicians and lobbyists as they debate everything from training the next generation of scientists to the trade balance in high-tech manufactured products.

    Drawing on the myriad studies, surveys, and analyses that make up this year's Indicators, Science offers a few facts that, given the tenor of those debates so far, may come as a surprise to readers.

    Some Things We Know

    Are tenure-track academic positions really an impossible dream for newly minted U.S. science and engineering (S&E) Ph.D.s?

    Indicators notes that “in recent years, the proportion of all recent doctoral recipients who are in tenure-track academic jobs has increased” (fig. 3–33; see graph). What's more, the share of recent Ph.D.s in mathematics and computer science holding tenure-track posts has rebounded sharply since a dip in 2001. (Overall, 26% of S&E Ph.D.s were in tenure-track positions 4 to 6 years after receiving their doctorates.)

    Bouncing back.

    NSF's survey of doctoral recipients finds that scientists who earned their Ph.D.s 4 to 6 years earlier are having more success in obtaining tenure and tenure-track positions.


    Are more and more foreign-born graduate students really heading home after receiving their U.S. doctoral degrees?

    In reality, “stay rates” for this large and desirable pool of talent are rising (fig. 3–65; see graph) despite the global expansion of the scientific work force. For example, close to 90% of the Chinese- and Indian-born students who earned their Ph.D.s in 2000 were still in the United States in 2005.

    Global citizens.

    Those with temporary visas are increasingly likely to remain in the United States 5 years after earning their S&E Ph.D.s, according to Michael Finn of Oak Ridge Institute for Science and Education. The already high rates for Chinese- and Indian-born students have risen during the past 2 decades.


    Are today's U.S. college students really less interested in S&E than previous generations?

    In fact, the percentage of first-year students who say they intend to major in S&E fields has remained constant for the past 2 decades (appendix table 2–15). And overall, that interest doesn't flag during college: The percentage of degrees awarded in S&E fields is slightly higher than the percentage of students declaring their interest as freshmen. That's because the number of students entering S&E programs more than offsets those who leave for other fields (table 2–6; see graph).

    Major decisions.

    A majority of first-year U.S. undergraduates declaring S&E majors in 1995 stuck with it through graduation, according to a study that followed that cohort. The relative balance varies greatly by field: the agricultural and biological sciences are the most fluid and the physical sciences the least. Significantly, the numbers of those who shifted into non-S&E fields were more than offset by those entering S&E fields.


    Are developing economies exceptional in cranking up their output of S&E graduates?

    China's remarkable expansion of its higher education system has captured the most attention, and its sixfold increase in the number of undergraduate natural science and engineering (NS&E) degrees in the past 20 years is indeed a shocker (fig. 2–35; see graph). But South Korea and the United Kingdom have both nearly tripled their yearly output of first university NS&E degrees since 1985. And even in the United States, with the largest supply, the number of NS&E bachelor's degrees has grown by 31%. (Significantly, that pool is almost entirely domestic. Students on temporary visas receive only 4% of U.S. S&E bachelor's degrees.) At the same time, experts have raised questions about how China's rapid expansion has affected the quality of the education being offered, an issue that is much harder to quantify (Science, 11 January, p. 148).

    The naturals.

    The natural sciences—biology; mathematics; and earth, ocean, agricultural, computer, and physical sciences—are still popular majors for undergraduates around the world, with China poised to overtake the United States in the absolute number of degrees awarded.


    Does Albert Einstein really represent the quintessential scientist to the average adult?

    Although the patent clerk from Bern may be the most recognizable face in the world of science, two recent surveys (table 7–12, fig. 7–13; see graph) found that people think physicians, not physicists, are in the “most scientific” field of study. Medicine was the clear favorite among both Europeans and Americans, well ahead of its companion field of biology and also outdistancing physics and engineering. The social sciences trail the pack, with Europeans naming history as the least scientific among five fields and Americans ranking it behind accounting in an eight-field race.

    Call a doctor.

    Americans and Europeans, in separate surveys covering similar disciplines, agree that medicine is the “most scientific” of fields. (Europeans were not asked about engineering, accounting, or sociology; Americans ranked the latter two sixth and last, respectively, among eight disciplines.)


    What We Don't Know

    Jeffrey Mervis

    For the first time, the National Science Foundation staff that compiles and writes Indicators confessed in print that there are lots of questions about the state of the S&E enterprise that its authoritative tome doesn't answer. The main reason, says Rolf Lehming, who oversees the volume, is that the data just don't exist or aren't reliable. “Collecting high-quality data can be exceedingly expensive, and governments cannot afford to collect all they could use productively,” he writes.

    Some topics for which data are lacking:

    • Education and training: Informal learning experiences, from online courses to zoos; how math and science teachers are trained and their career paths; how to track emerging fields and multidisciplinary programs; how to compare curricula around the world;

    • Across the labor force: The global flow of S&E workers; lifelong learning and employer training programs;

    • R&D trends: The characteristics of research-intensive businesses; research outside academia, the federal government, and large companies; the outsourcing and offshoring of S&E jobs.


    "Little" Cosmic Ray Observatory Aims to Make a Big Mark

    Adrian Cho

    In a field in which bigger is usually better, what can you hope to achieve with a new experiment that's only a quarter as large as its well-established rival? Plenty, say 117 physicists mainly from Japan and the United States who have just started taking data with a cosmic ray observatory that covers 730 square kilometers of western Utah.

    Dubbed Telescope Array, the observatory aims to spot the most energetic subatomic particles from space. Such ultrahigh-energy cosmic rays pack as much energy as a golf ball hitting a fairway, and they strike Earth at a rate of 1 per century per square kilometer. Interest in them grew 10 years ago, when Japanese physicists reported an odd excess of the highest energy rays. It surged last year, when the gargantuan Pierre Auger Observatory in Argentina traced the rays to certain galaxies (Science, 9 November 2007, p. 896).
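    That quoted rate fixes the observatory's expected catch almost directly; a quick sketch (simple arithmetic, not a figure from the collaboration):

```python
# Expected ultrahigh-energy cosmic rays per year over Telescope Array,
# given a flux of about 1 per square kilometer per century.
rate_per_km2_per_year = 1 / 100  # 1 per century per square kilometer
area_km2 = 730                   # the array's footprint in western Utah

expected_per_year = rate_per_km2_per_year * area_km2
print(round(expected_per_year, 1))  # on the order of 7 events per year
```

    A handful of events a year is why sheer collecting area matters so much in this field, and why the four-times-larger Auger observatory sets the benchmark.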

    Telescope Array aims to test the Auger result and to decipher the nature of the rays. It enters the fray as an underdog: Although it's bigger than the city of Chicago, it's only a quarter the size of Auger, which has been taking data since 2004. But team members say Telescope Array has key technological advantages, and others say it may be better for pursuing certain questions. “This is a very important experiment,” says Veniamin Berezinsky, a theorist at Gran Sasso National Laboratory in Assergi, Italy.

    The Telescope Array collaboration formed when two rival groups merged. Physicists measure the energies of cosmic rays in exa-electron volts, and in 1998, researchers with the Akeno Giant Air Shower Array (AGASA) near Tokyo reported seven rays with energies above 100 EeV. By 2002, they saw 11. That was about 10 times more than expected; if the rays were protons, then on average, interactions with the cosmic microwave background should have sapped their energy to 60 EeV before they had traveled 200 million light-years.

    Some theorists took the excess as evidence that the rays were born in decays of exotic particles lingering nearby. But physicists with the High Resolution Fly's Eye (HiRes) detector in Dugway, Utah, argued that there was no excess: They saw only two such rays. The HiRes and AGASA groups studied the rays using different techniques, however. So to resolve the discrepancy, they eventually decided to build an array that would use both.

    When a cosmic ray strikes the atmosphere, it sets off an avalanche of particles called an extensive air shower. AGASA sampled the shower using 111 particle detectors spread over 100 square kilometers. The shower also causes the air to fluoresce, and HiRes studied that light using twin batteries of telescopes. Telescope Array comprises 503 particle detectors and 38 telescopes in three batteries.

    On the range.

    Spaced 1.2 kilometers apart, Telescope Array's particle detectors stretch across the scrub. Its telescopes (inset) perch on nearby hilltops.


    Japan put up $13 million for the $16 million array, but researchers never considered constructing it there. “Building a fluorescence detector in Japan is impossible,” says Masaki Fukushima of the University of Tokyo. “Because of the humidity, the transparency of the air is very limited.” The project got its inapposite name because the Japanese had previously proposed an array of 10 telescopes with no particle detectors. “Once you propose something you don't change the name, because no one will know what you're talking about,” says Pierre Sokolsky of the University of Utah, Salt Lake City. “So even though it makes no sense, the name stuck.”

    Because it's bigger, Auger will see more of the rare rays above 60 EeV. So Sokolsky plans to focus on lower energies and especially on a kink in the spectrum of rays near 4 EeV that might mark the point at which rays from within our galaxy peter out and those from beyond take over. The team has proposed a “low-energy extension” of 100 more-tightly-spaced detectors and two more telescope stations to measure showers with between 0.03 EeV and 10 EeV. “For this, the Telescope Array and especially the low-energy extension is an excellent instrument,” Gran Sasso's Berezinsky says. Auger should have similar additions in place in 2009.

    In contrast, Fukushima hopes to pursue the highest energy rays. Many physicists now doubt the excess reported by AGASA, as neither HiRes nor Auger has seen it (Science, 13 July 2007, p. 178). Still, Fukushima and his Japanese colleagues hope to probe the discrepancy between AGASA and HiRes.

    Telescope Array will also measure a ray's energy more precisely than Auger can, Fukushima says. Auger comprises four telescope batteries and nearly 1500 particle detectors. But Auger's detectors are tanks of water, which produces light called Cherenkov radiation when a particle zips through it at near-light speed. Telescope Array's detectors are sheets of plastic scintillator that emit light through another mechanism. “Definitely we are measuring the cosmic rays in a different way and with better energy resolution,” Fukushima says.

    Ultimately, Auger and Telescope Array may be forced to work together. The Telescope Array team hopes someday to expand its observatory, and the Auger team plans to build a far-bigger array in Colorado in a few years. The two arrays could end up being combined, Sokolsky says. “Whatever we scientists might think about it, that's going to be imposed on us by the funding agencies,” he predicts. For now, however, the competition is on.


    Where Has All the Stardust Gone?

    1. Richard A. Kerr

    Surprise has followed surprise for cosmochemists analyzing the dust sample that the Stardust spacecraft returned from comet Wild 2 in January 2006. First, they found tiny flecks of once-molten minerals—material very different from the raw, primordial dust they expected to see. Such unaltered, so-called presolar material was the prime ingredient of the rocky planets and was thought to abound in icy comets. But on page 447, researchers report that they have failed to find a single speck of it.

    “For those of us who study presolar materials, it's turned out to be a bit of a bust,” says cosmochemist Larry R. Nittler of the Carnegie Institution of Washington's Department of Terrestrial Magnetism in Washington, D.C. “Wild 2 seems more related to asteroids than comets,” because all asteroids were altered from the solar system's primitive starting materials. Still, “the mission's been a huge success,” says John Bradley of Lawrence Livermore National Laboratory (LLNL) in California, a co-author of the Science paper. “It's changing the way we think about comets.”

    Before Stardust's return, cosmochemists thought of comets as vaults where the primitive ingredients of the planetary recipe had been locked up. Their best look at the likely ingredients list came from the study of certain meteoritic particles collected in Earth's stratosphere by retired spy planes. Because of their exotic isotopic composition, these particular interplanetary dust particles (IDPs) looked as though they might be comet dust. Presumably, such primitive dust fell into the cold, outer reaches of the nebula that gave rise to the planets and combined with nebular ices to form comets, in which the dust has been preserved ever since.

    An unfortunate match.

    Globs of mineral-riddled glass (left) from a comet sample were created during sample collection, as replicated in the lab (right).


    One of the unaltered components of cometlike IDPs was so-called GEMS (glass with embedded metal and sulfides). And early analyses of particles captured near Wild 2 by Stardust tantalizingly revealed GEMS-like particles. But cosmochemist Hope Ishii of LLNL and her colleagues report in this issue that the GEMS-like particles in Stardust samples were actually forged as Wild 2 dust particles plowed into the wispy glass of the Stardust sample collector at a blistering 22,000 kilometers per hour. The researchers made some themselves by shooting mineral particles into collector material at Stardust velocities. Stardust principal investigator Donald Brownlee of the University of Washington, Seattle, does allow that any true GEMS—which tend to be submicrometer in size—might have been lost on impact with the Stardust sample collector.

    Ishii's group also found only one microscopic “whisker” of the mineral enstatite. Such threadlike crystals are common in primitive, cometlike IDPs, but the lone Stardust find has the wrong orientation to have come from a comet. And what little organic matter could be found in the Stardust sample has a much lower deuterium-hydrogen ratio than the organic matter of cometlike IDPs.

    All in all, “it's looking as if Wild 2 is more like an asteroid than a primitive comet,” says Ishii. Brownlee agrees. Rather than preserving the original ingredients of planets, comets—or at least Wild 2—seem to be loaded with materials first altered by the great heat near the young sun, he says. Then those altered materials must have been carried outward to the outer reaches of the nebula, where comets incorporated them. “I would say a large fraction of the [outermost] nebular materials were probably transported there” from much nearer the sun, Brownlee says, “which is pretty amazing.” Now, no one is at all sure where the solar system's lingering primitive materials might reside.


    Dutch Universities Split Over Nobel Laureate's Rehabilitation

    1. Martin Enserink

    AMSTERDAM, THE NETHERLANDS—Allegations that the late Dutch physicist Peter Debye was cozy with the Nazis before and during World War II have produced a split decision among the universities that once honored him. Following the advice of an independent committee, Utrecht University last week exonerated the Nobelist by restoring the name of its Debye Institute for NanoMaterials Science. But Maastricht University, in Debye's hometown, rejected the advice and permanently removed his name from a scientific prize.

    Both universities dropped Debye's name after a book and a magazine article by journalist and science historian Sybe Rispens charged that Debye had “dirty hands” during and after his 1934–1939 stint as director of the Kaiser Wilhelm Institute for Physics in Berlin. Debye asked Jewish members of the German Physical Society to step down in a 1938 letter, for instance. Although not disputing the letter, Debye's defenders said he was neither an anti-Semite nor a Nazi sympathizer but an apolitical figure mainly interested in science (Science, 30 June 2006, p. 1858).

    In November, a 200-page study by Martijn Eickhoff of the Netherlands Institute for War Documentation, which called Rispens's portrayal of Debye a “caricature,” offered a nuanced picture of the scientist. It said Debye had a “survival mechanism of ambiguity.” Based on that report, a committee set up by the two universities and chaired by physicist and politician Jan Terlouw concluded on 17 January that there's “no evidence of bad faith” on Debye's part, and that the institutes should reinstate his name. But in a statement, Maastricht University insisted that Debye's role remains “irreconcilable” with an award.

    To Mark Walker, a historian at Union College in Schenectady, New York, who specializes in science in the Nazi era, that is an unsatisfactory ending. “I think the whole affair is unfair to Debye's memory,” he says. “He acted according to his standards. They weren't the standards of a hero, but they weren't that bad.”


    A Time War Over the Period We Live In

    1. Richard A. Kerr

    Like astronomers battling over the status of Pluto, geoscientists are revving up to settle the fate of the interval of time known as the Quaternary, as well as the status, some feel, of an entire field.

    The real quaternarists.

    Study of the Quaternary includes the environment of early humans.


    The dinosaurs had their Cretaceous period and the reptiles their Jurassic, but for 200 years now, humans have not agreed on what period of geologic time we are living in. It could be the Neogene period. On many geologic time scale charts, the Neogene runs from 23 million years ago to the present. Or it could be the Quaternary. “The Quaternary is the most important interval of geologic history,” says John Clague, former president of the International Union for Quaternary Research (INQUA). On some charts, the Quaternary spans the last couple of million years of time, including when humans took up tools and the world began slipping into icy climatic gyrations.

    Depending on the time scale considered, the Quaternary sometimes takes pride of place following the Neogene period. But other times it's relegated to sideshow status, and sometimes it's even absent entirely. Indeed, in recent years, the International Commission on Stratigraphy (ICS) “abolished” the Quaternary, according to riled quaternarists. “They tried to suppress it while no one was looking,” says Philip Gibbard of the University of Cambridge in the U.K. “They nearly got away with it, [but] we were not going to have it.” The Quaternary “is a manifestation of our community,” adds Clague. “We don't want anyone denigrating that.”

    Take your pick.

    The Quaternary has been variously portrayed in a secondary status (left), as a subera (middle), and as a period (right).


    Now these geoscientists are heading for a showdown over the Quaternary. At the next quadrennial International Geological Congress this August in Oslo, Norway, the community will consider an ICS proposal that would enshrine the Quaternary as a full-fledged period encompassing 2.6 million years expropriated from the young end of the Neogene. But there are rules for dividing up time, notes marine geologist William Berggren of Rutgers University in Piscataway, New Jersey—rules that yield a consistent and therefore useful common language among geologists. And the quaternarists aren't following them, he says. “This is not going to happen.”

    A matter of time …

    Geologists have divvied up time for the sheer convenience of it ever since the late 18th century, when they began to realize just how much of it there was, but the formal rules for dividing the geologic time scale started emerging only in the mid-20th century. Most fundamentally, the divisions must be hierarchical—a whole number of the smallest units of time constitute a single unit at the next higher level, and so on. And the boundaries between units must be recognizable worldwide, not just at a few special places. Conflicts between such modern rules and divisions that evolved over centuries linger into the early 21st century.

    The Quaternary came into usage 2 centuries ago as the most recent of four divisions of the fossil record of life: the Primary, Secondary, Tertiary, and Quaternary. Geologists generally used Quaternary to refer to the loose soil and sediment moved around by the glaciers of the ice ages. That sediment held a distinctive set of fossils, living representatives of which are still common. But Primary and Secondary fell out of use long ago, supplanted by other names. In recent decades, ICS—with the consent of the International Union of Geological Sciences (IUGS), the world's ruling body on such matters—dropped the Tertiary as well. Now, the Quaternary name “doesn't make any sense,” concedes Norman Catto of Memorial University of Newfoundland in St. John's and editor-in-chief of Quaternary International. “It's the fourth division of a system in which the other three divisions have been thrown out.”

    The Quaternary may be a lingering anachronism, but “the name is less important than the concept,” says Catto. “We have the strong [early] human element involved. That sets it apart. And it's defined as a time of glaciation.” Indeed, many INQUA researchers are, strictly speaking, not geologists but anthropologists, climatologists, glaciologists, or paleoecologists, he says, specialists who are not attuned to the niceties of the modern geologic time scale.

    Even as the term “Quaternary” was coming into use, however, another, slightly different interval of time with a different name was also becoming identified with the ice ages. In 1839, a founder of modern geology, Charles Lyell, dubbed what turned out to be the past 1.8 million years the Pleistocene (“most recent”). He defined the interval on the basis of a distinctive set of fossil mollusks; many of those species are still around today.


    But unlike the Quaternary, Lyell's Pleistocene eventually became firmly incorporated in the emerging, official geologic time scale. In 1983, after 35 years of dickering in the community, a joint INQUA-ICS working group defined the beginning of the Pleistocene as that point in an outcrop of marine sediment at Vrica in southern Italy where several species of microfossils make their first or last appearance in the geologic record. Earth's magnetic field flipped about then, too; the reversal is recorded in the sediments around the world. The community drove the “golden spike,” as the marker of a geologic boundary is called, at Vrica because its fossil transitions could be recognized far beyond Italy. Geologists working around the world could tell just where in the geologic record they were.

    … or perspective

    For the next decade or two, the Quaternary languished in the shadow of the Pleistocene. IUGS had ratified the golden spike at the beginning of the Pleistocene “isolated from other more or less related problems, such as … the status of the Quaternary,” as the formal IUGS announcement put it in 1985. And INQUA “was sleeping” through the 1990s, says Gibbard. He would soon change that.

    In December 2001, Gibbard heard that a major scientific publication then in the works—A Geologic Time Scale 2004, 600 pages long, with 40 contributors, and co-sponsored by ICS—would give the Quaternary short shrift. In the book's accompanying wall chart, the Neogene period and its youngest subdivision—the Pleistocene epoch—reigned supreme. The Quaternary made just one appearance, on a separate plot of the comings and goings of the ice ages. It lost out because “partly to our surprise, it had no official rank [in the hierarchy] or length,” says James Ogg of Purdue University in West Lafayette, Indiana, who prepared the chart with Felix Gradstein of the University of Oslo, Norway.

    Although the time scale had no official scientific standing, Gibbard sprang into action. At his instigation, “INQUA said to IUGS we weren't going to take it from ICS,” Gibbard says. “ICS were told in no uncertain terms by IUGS they couldn't ignore the Quaternary community.” In response to the fracas, IUGS President Zhang Hongren of Beijing withheld IUGS's 2007 funding for ICS until ICS properly addressed the Quaternary problem.

    And address the problem it did. “Now I think we've reached a pretty good compromise,” Ogg says. “We hope so.” The proposal gives the last 2.6 million years of the Neogene to an official Quaternary period, beginning about when world ocean circulation shifted and climate swings intensified in a cooling world. “We won a battle,” says Clague. “It goes beyond a name. It's about how people working in the Quaternary are perceived.”

    To follow the rules, some cutting and pasting of the time scale will be required. In order to line up the beginning of the Quaternary with the beginning of the Pleistocene and thus maintain a proper hierarchy, an 800,000-year slice of the earlier Pliocene epoch will have to move up into the Pleistocene. Some geologists are incensed. “All of a sudden they want to move [the Pleistocene] down 800,000 years,” says marine geologist Lucy E. Edwards of the U.S. Geological Survey in Reston, Virginia. “Why? ‘Because we want it.’ It upsets the stability of the nomenclature without a good scientific reason. Many more marine geologists working in the Pleistocene would be completely discombobulated.”

    Critics say the revision violates a basic rule: that boundaries on the time scale are not delineated by climate changes such as revved-up ice ages. Exactly when a climate event appears in the geologic record, they point out, can depend on the latitude where the record was laid down. Edwards says quaternarists would take the boundary to be “when the glaciation started where I work.” Marine geologist Marie-Pierre Aubry of Rutgers University plans to hold firm against the change. “Are we going to give up our principles?” she says. “I don't believe so.”

    Proponents of the proposal point out that the proposal pegs the Quaternary's lower boundary to an extremely well defined climate event: a sharp swing recorded at the same time at all latitudes in marine-sediment oxygen isotopes. But it still doesn't pass muster with marine geologists. “Climate change is not a criterion for defining units except for quaternarists,” Berggren says. “They think climate change at 2.6 million years is the most important thing, [but] climate changes are not unique signals in the record.” Similar climate oscillations precede and follow the chosen swing, he notes, and major episodes of glaciation have occurred for hundreds of millions of years. “The rest of the community is going to ignore it,” he says.

    The arguments will come to a head this August at the 33rd International Geological Congress in Oslo. “We're going to make time for an open forum and discussion,” says Peter Bobrowsky of the Geological Survey of Canada in Ottawa, who is secretary general of IUGS. “We hope to resolve the matter of the Quaternary [in Oslo] or agree on how to resolve it.” He says he expects a good outcome, if only because IUGS has ruled that nothing will be carved in stone before 2009. Oslo “could be a free-for-all,” he says. “It won't be a bloodbath. They are academics.”


    Why We're Different: Probing the Gap Between Apes and Humans

    1. Michael Balter

    Researchers at a high-level meeting probe the ancient question of what sets the human brain apart from those of other primates.

    Watch me!

    The ability of chimps to learn from each other may have been underestimated.


    GÖTTINGEN, GERMANY—We sometimes see apes and monkeys in the movies, but we never see them at the movies. Although nonhuman primates can do remarkable things—chimps have rudimentary cultures, and some monkeys have highly complex social systems—none shows the kind of creativity and innovation that are the hallmarks of Homo sapiens. Researchers have long puzzled about which human behaviors stem from our primate roots and which are unique to the hominid line.

    Beginning in the 1960s, scientists focused on the similarities, as lab and field studies revealed that the cognitive talents of other primates had been underestimated. But during the past decade or so, researchers say, there has been renewed interest in the traits that set us apart. At a recent meeting* here, anthropologist Carel van Schaik of the University of Zurich, Switzerland, emphasized this evolutionary divergence. “Mind the gap!” he said in a keynote talk. “Humans have a huge number of [novel] characteristics.” Indeed, participants at the meeting, which was designed to compare and contrast humans and nonhuman primates, demonstrated several of these seemingly unique human behaviors: advanced planning (the conference was months in the making), social organization and cooperation (everyone showed up at the same time and place), and culture and teaching through language.

    At the conference, researchers heard evidence that many of these behaviors, such as planning, may have deep evolutionary roots. But some talents, such as cultural innovation, seem unique to our species, and others, including altruism, may represent a novel blend of old and new characteristics. The challenge now, says van Schaik, “is to figure out how one ape among many—humans—could become so radically different.”

    The waiting game

    “Genius,” said the 18th-century French naturalist Buffon, “is only a great aptitude for patience.” To many researchers, our ability to trade immediate gratification for long-term rewards sets us apart from other, more impulsive animals. Without patience, activities from planting crops for later harvest to sending space probes to Mars would be impossible. But a talk at the meeting by behavioral ecologist Jeffrey Stevens of the Max Planck Institute for Human Development in Berlin suggests that patience has evolutionary roots that predate the ape-human split—and that in some situations, humans may be even more impulsive than apes.

    Most studies suggest that animals have a low tolerance for delayed gratification. When offered a choice between two food pellets immediately or six pellets later, pigeons will wait only about 3.5 seconds for the larger reward. Rats are only slightly less impulsive in similar tests, and even monkeys seem to live largely in the present: In a 2005 study, Stevens found that the patience of marmosets wore thin after 14 seconds. One notable exception is the scrub jay, which stores food for later use and probably represents a case of parallel evolution, says psychologist Nicola Clayton of the University of Cambridge in the U.K., who led the jay research (Science, 23 February 2007, p. 1074).

    In new studies, Stevens and his co-workers measured how long our closest relatives, chimpanzees and bonobos, would play the waiting game. The apes were placed in an apparatus designed to give them a choice between two grape halves immediately or six grape halves later. (Trial runs taught the apes that the larger food amounts arrived after a delay.) Bonobos accepted a delay of about 74 seconds, whereas chimpanzees sweated out a full 2 minutes to get the larger reward—although they did a lot of fidgeting and head-scratching while they waited.

    The experiment shows that a capacity for delayed gratification was already present in the common ancestor of humans and apes, says Stevens. “The ability to restrain impulsiveness would certainly seem to be a prerequisite for the sort of planning we see in many human activities,” agrees primatologist Dorothy Cheney of the University of Pennsylvania.

    Stevens also tried to directly compare humans and chimps in a similar experiment. To his surprise, humans (who were eating raisins, M&M candies, or popcorn) caved much more quickly than apes: About 72% of the chimps waited the 2 minutes for a bigger share, whereas only 19% of the humans did so. But given humans' ability to buy groceries for the week, van Schaik suspects that “people did not really take the experiments as seriously as the chimps.”

    This cricket's on me

    Although chimpanzees may be surprisingly patient, they fail miserably at another typically human behavior: lending a spontaneous helping hand to one's neighbor without expecting anything in return. Such altruism is very common among humans, some of whom even sacrifice their own lives to help others. Yet recent work by anthropologist Joan Silk of the University of California, Los Angeles (UCLA) and Michael Tomasello of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, has shown that chimps, although remarkably cooperative in many ways, do not spontaneously help fellow apes. Other work has found that most nonhuman primate cooperation involves self-interested reciprocal exchanges. Many scientists have concluded that true altruism requires higher cognition, including an ability to read others' mental states, called theory of mind (Science, 23 June 2006, p. 1734).

    Yet humans may not be the only altruistic primates. A team led by Judith Burkart of the University of Zurich, which included van Schaik, looked for helping behavior in marmosets, who lack advanced cognition but are highly cooperative. One monkey, the donor, was given a choice of pulling a tray with a bowl that contained a juicy cricket or pulling a tray with an empty bowl into an area where another monkey was sometimes present. Only the recipient could get the food, with no payoff for the donor. Nevertheless, the donor pulled the cricket tray an average of 20% more often when a recipient was present than when it was absent, Burkart said at the meeting. Moreover, the marmosets were about equally generous to genetically unrelated monkeys as they were to their kin.

    Why do marmosets and humans engage in spontaneous altruism when other primates do not? The answer, Burkart proposed, is that both species, unique among primates, are cooperative breeders: Offspring are cared for not only by parents but also by other adults. Marmoset groups consist of a breeding pair plus an assortment of other helpers, whereas human parents often get help from grandparents, siblings, and friends. Burkart suggests that primate altruism sprang from cooperative breeding. In humans, these altruistic tendencies, combined with more advanced cognition, then nurtured the evolution of theory of mind.

    “This is an excellent piece of work,” says Silk, although she cautions against drawing sweeping conclusions about the evolution of human altruism from “just two data points,” humans and marmosets. Nevertheless, Tomasello says, if the results are valid, they “demonstrate that generosity with food and complex cognitive skills are independent adaptations, which humans may have combined in unique ways.”

    Cultural ratchet

    Researchers agree that cultural innovation is one arena in which humans stand alone. Chimps and other primates do show signs of rudimentary culture, such as different traditions in the use of tools to crack nuts (Science, 25 June 1999, p. 2070). But the highly complex cultures produced by human societies are unique to our species. What accounts for this cultural gap?

    Some scientists, including Tomasello and UCLA anthropologist Robert Boyd, who both attended the meeting, have argued that other primates are poor at imitating others and learning from them. Humans, in contrast, are such good imitators that they accumulate culture and knowledge over generations, a “ratcheting” effect that bootstraps the slow pace of biological evolution with a powerful dose of cultural evolution.

    Beyond the family.

    Did cooperative breeding help make both marmosets and humans altruistic?


    Yet studies led by psychologist Andrew Whiten of the University of St. Andrews in Fife, U.K., have found that chimps' ability to imitate might be underrated. Some of these experiments have employed a special food dispenser that can be operated both by poking a stick into it and by using the stick to lift a lever. When chimps who had learned one or the other technique from humans were reintroduced to their peers, the other animals quickly learned to follow their example (Science, 26 August 2005, p. 1311). But Tomasello suspected that the chimps might be emulating the motion of the dispenser rather than imitating another chimp.

    In new work reported at the meeting, Whiten and his co-workers claim to have ruled out that possibility. They tied a length of fishing line to a lever so that they could surreptitiously pull it to deliver a grape. Yet when 12 chimps were exposed to this “ghost” apparatus, none learned to pull the lever themselves. The team concluded that chimps could only learn to use the machine if taught by another chimp or a human—through social learning or imitation.

    “A decade ago, people were doubting” that social learning took place in nonhuman primates, says Joanna Bryson, a cognition researcher at the University of Bath, U.K. “Since then, Whiten has … prove[d] beyond a doubt that it occurs.”

    Whiten said at the meeting that these results suggest that imitation was in place long before cultural ratcheting and imply a somewhat different model for cultural evolution from that of Tomasello and Boyd. The element that kept chimps and possibly early hominids from complex culture might have been a poor ability to innovate, he suggested. For example, early humans made Acheulean hand axes in the same basic form for hundreds of thousands of years.

    Van Schaik agrees with this logic: “It might be that apes … fail to produce anything that goes beyond what they already have.” And Tomasello now says his earlier views require modification. “[Whiten's results] demonstrate that chimpanzee social learning is more powerful than I previously thought,” he says.

    Indeed, for some researchers at the meeting, talks such as Whiten's suggested that the evolutionary gap between humans and other primates might not be insurmountable. “We are just primates with a particular combination of traits,” says Bryson. “Seeing how all those traits came together and exploded into our current culture is really interesting. It makes you wonder whether it might happen soon for another species, given a chance.”

    • *Primate Behavior and Human Universals, Göttingen, Germany, 11–14 December 2007.


    Shell Shock Revisited: Solving the Puzzle of Blast Trauma

    1. Yudhijit Bhattacharjee

    Even at a distance, explosions may cause lasting damage to the brain. Such findings could have big implications for arming and compensating troops.


    Working at the Military Hospital in Belgrade during the brutal Balkan war of the 1990s, neurologist Ibolja Cernak encountered a medical enigma. She saw soldier after soldier with memory deficits, dizziness, speech problems, and difficulties with decision-making—but no obvious injury. Cernak recalls one 19-year-old who went to a grocery store and began to weep after he couldn't remember how to get back home. When his mother brought him to the hospital a few days later, Cernak learned what later emerged as a common element in all these cases: The soldier had survived an explosion on the battlefield.

    The strange thing was that most of these patients had not suffered a direct injury to the head. And yet, in computed tomography and magnetic resonance imaging scans, Cernak saw signs of internal damage. In some cases, the brain's ventricles—channels that carry cerebrospinal fluid—had become enlarged; in others, there was evidence of minor bleeding. But when Cernak dug into the medical literature for an explanation, she came up empty. According to the available research, shock waves from an explosion injure mainly air-filled organs such as the lung and the bowel, not the brain.

    With a small band of collaborators in Belgrade, China, and Sweden, Cernak undertook animal studies that eventually confirmed that blast waves can cause neuronal damage. The work drew little attention until 2 years ago when hundreds of U.S. and British soldiers began returning from Iraq with symptoms similar to those of Cernak's patients. As roadside explosions became more common, military doctors suspected that these symptoms were the likely result of mild traumatic brain injury (TBI) sustained in blasts. Seeing her observations borne out was as if “a myth had become reality,” says Cernak, who is now a researcher at the Applied Physics Laboratory at Johns Hopkins University in Baltimore, Maryland.

    How blasts affect the brain has since become an urgent question in military medicine. Last summer, the U.S. Congress gave $150 million to the Department of Defense (DOD) for the first year of research on TBI—both severe injuries that damage the skull and milder ones suspected of causing neurological deficits. The Defense Advanced Research Projects Agency (DARPA) has already launched a $9 million research program aimed specifically at understanding trauma caused by shock waves, heat, and electromagnetic radiation emanating from blasts. Another $14 million a year is going to the Defense and Veterans Brain Injury Center (DVBIC), a DOD-funded agency headquartered in Washington, D.C., for research and outreach on TBI.

    This flurry of interest has focused a spotlight on Cernak's research. There is growing consensus that blasts can produce subtle injuries in the brain as suggested by Cernak several years ago. In fact, the Department of Veterans Affairs (VA) proposed a new rule this month acknowledging blast-related TBI as a special neurological condition whose symptoms may have gone undetected in the past. The proposed rule, published in the Federal Register on 3 January, would allow for greater disability compensation to victims than is granted currently.

    But many researchers are skeptical of Cernak's ideas about how these injuries might occur. Cernak postulates that blast waves ripple through the victim's torso up into the brain through the major blood vessels, leading to neurological effects that can be slow to appear. Although she has evidence from animal experiments to back up that hypothesis, she admits that more research is needed. If the mechanism is confirmed by future studies, Cernak says, it would mean that helmets do not protect the brain against blast injury.

    Besides raising questions about the protection of troops currently in combat, Cernak's suggestion that simply being exposed to an explosion might lead to long-lasting brain damage has opened a Pandora's box, particularly for veterans. It implies that some could be suffering from neurological deficits that went undiagnosed or were mistakenly attributed to posttraumatic stress disorder (PTSD). Indeed, since the government began putting out information about blast-related TBI, veterans have been trickling in to seek treatment for mental problems that some have lived with for decades. “It may well be that blast injuries follow the pattern of Agent Orange and Gulf War syndrome,” says former VA psychiatrist David Trudeau, referring to ill-defined health problems that have lingered for years after battle.

    Hidden trauma

    If Cernak had been a doctor during World War I, she says, she might well have recognized mild TBI among the thousands of soldiers who suffered from what was simply called “shell shock.” At the time, however, many doctors and military commanders viewed shell shock as a transient psychological phenomenon that afflicted soldiers whom they considered mentally weak.

    Cernak discovered something very different: that soldiers' mental problems seemed to be driven by enduring physical changes in the brain. To test her hypothesis, she conducted a study of 1300 patients who had suffered penetrating wounds to the lower body but not the head. More than half had suffered injuries in a blast; the rest had been wounded by projectiles. Many of the blast victims complained of symptoms such as insomnia, vertigo, and memory deficits, and more than 36% in this group showed irregular patterns of electrical activity in the brain—as measured by electroencephalograms taken within 3 days of the injury—compared to only 12% in the other group. A year later, 30% of blast-injured patients still showed abnormal brain activity compared to 4% of the rest. Cernak says the findings, published in the Journal of Trauma in 1999, suggested that the mental problems of blast victims had a biological basis.

    Her study wasn't the first to make that point. A year earlier, VA researchers had found that among veterans with PTSD, individuals with a history of blast exposure were much more likely than others to have abnormal brain activity as well as cognitive and behavioral problems. “Our evidence pointed to the possibility that blast injury was a long-lasting injury in combat veterans,” says Trudeau, who retired in 2000. He says he was disappointed by the lack of follow-up to the study, published in the August 1998 Journal of Neuropsychiatry. “The reception we got was pretty lukewarm,” he says.

    For decades, Army researchers had been studying the effects of blast waves but with a different focus. They concentrated on how to protect the lungs and bowel because the pressure from an explosion is most likely to shear at the interface of these tissues, where densities differ. DOD was so confident that advanced body armor was protecting troops against lung and bowel injuries that it closed down this research program in 2003. “We thought, why spend more money on this when we've fixed the problem?” says Geoffrey Ling, a neurologist and a program manager at DARPA.

    Then the bad news arrived. As blast survivors from Iraq were air-lifted to hospitals, U.S. Army doctors, including Ling, who was deployed in Iraq in late 2004, began to see patients whose brains had swelled markedly within hours of being close to a blast. Some had clear head injuries, but many did not. Even in cases involving visible wounds, the extent of swelling was often much greater than expected, leading neurosurgeons to wonder whether blast waves had played a role in addition to penetrating shrapnel. Ling says the patterns of vascular enlargement seen across a range of patients showed a continuum of brain injury, suggesting that there could be milder versions that were less obvious.


    That suspicion has grown stronger with hundreds of soldiers returning from the war zone complaining of a common cluster of cognitive and behavioral problems. Army doctors say they have encountered many patients who are unable to perform simple addition and subtraction, read more than one sentence at a stretch, or recall simple things like what they had for lunch. “The majority are individuals who lost consciousness or were dazed after a blast but did not sustain overt head injuries,” says Ronald Riechers, a neurologist at Walter Reed Army Medical Center in Washington, D.C. “Within a short time frame, they develop headaches and notice that their reaction time and concentration are not the same as before.” Based on these evaluations, DVBIC estimates that 10% to 20% of all soldiers on duty in Iraq and Afghanistan have suffered some type of TBI.

    Ling says the TBI numbers prompted DOD to restart its research on blast injury, this time with a focus on the brain. DARPA is funding two main projects as part of the first basic science effort on the topic. One will study the mechanical and cellular effects of blast waves in an animal model. Another will look at the consequences of repeated exposures to low-intensity explosions among military breachers, whose job is to blast holes into buildings using shoulder-launched weapons. “Once you know for certain what in a blast is really hurting the brain and how, you can use that to develop therapies and prevention strategies,” says Ling.

    A tsunami in the brain

    Although it is becoming accepted that blast waves can cause TBI, Cernak's theory about how the damage occurs is controversial, and it has implications for how best to protect troops. She hypothesizes that when blast waves strike the body, they transfer kinetic energy and cause pressure in the main blood vessels to oscillate rapidly. A pulse travels up through the neck into the brain, damaging axonal fibers and neurons in the hippocampus, brainstem, and other structures close to cerebral vessels. The shock can also injure cells farther out in the cortical regions.

    That mechanism is entirely different from the more widely studied effects of acceleration or deceleration in a car crash. Researchers know that a crash impact can shake the brain so violently that axonal fibers are torn. Some say victims of explosions could be experiencing a similar whiplash effect, in contrast to Cernak's view; if so, helmets designed to dampen that effect could help. “I am very skeptical that kinetic energy could be transferred through the vascular system,” says J. Clay Goodman, a neuropathologist at Baylor College of Medicine in Houston, Texas. “It is much more reasonable to consider the blast effects directly on the cranial vault and the brain.”

    Cernak says her findings show the vascular route to be more plausible. In experiments that exposed rats and rabbits to a simulated blast wave in a shock tube—a cylinder through which an air pulse is transmitted at high velocity—Cernak and her colleagues found that immobilizing the animal's head with steel plates to prevent whiplash effects did not protect against hippocampal cell damage, as they reported in the Journal of Trauma in 2001. Cernak says the vascular-transmission theory could explain the unique combination of symptoms in blast-induced TBI, as well as why neurological symptoms are seen in soldiers wearing helmets. For example, memory deficits hint at damage to the hippocampus, whereas problems in orientation reflect injuries to the cerebellum. “What's happening in blast injury is that these inner structures are being affected,” Cernak says, in contrast to TBIs in traffic accidents and contact sports, where the cortex bears most of the brunt.

    Ripple effect. Cernak suggests that the shock waves from a blast reach the brain through major blood vessels.

    Cernak presented unpublished results last month at the Blast Injury Conference in Tampa, Florida, showing that exposure to blast waves can trigger neurodegeneration in rat brains, fragmenting the walls of neurons in the hippocampus and other regions. Similar findings have been published by Annette Säljö, a researcher at the University of Göteborg in Sweden and a collaborator of Cernak's. Säljö and her colleagues reported in the Journal of Neurotrauma in August 2000 that rats exposed to blasts showed a buildup of neurofilament proteins in the cortex and the hippocampus during the week following the injury. This suggests that the damage can worsen over time, like a “slow cooking under the surface,” says Cernak: “One could think of it as a horribly accelerated aging of the brain.”

    If blast waves indeed cause injury by vascular transmission, new types of body armor may be needed. “We would need to develop materials that completely absorb or reflect the full range of blast-wave frequencies generated by an explosion,” says Cernak, adding that current body armor only shields against some of a blast's kinetic energy.

    Cernak has done pioneering work, says John Povlishock, a neuroanatomist at Virginia Commonwealth University in Richmond, adding that she may be right that a “rapid rise and fall in venous pressure” is what stamps the blast's signature on the brain. But more studies are needed to validate her ideas and translate the animal results into humans: “This is a topic with great economic, military, and social implications,” he says, “and as of now, the literature is extremely limited.”

    Needed: A gold standard

    As blast casualties from Iraq have mounted, the U.S. military has stepped up efforts to detect TBI among troops. In July 2006, the Army Surgeon General asked all unit commanders in Iraq to request TBI screening for soldiers displaying “poor marksmanship, delayed reaction times, decreased ability to concentrate, and inappropriate behavior.” Troops who have been in a blast are evaluated by field medics using a short questionnaire that asks, among other things, if the person lost consciousness and had trouble remembering things from just before the explosion. Depending on the severity of the symptoms, they are asked to take a day off or see a neuropsychologist.

    Some veterans groups believe a more aggressive screening policy is needed, especially because the symptoms of blast injury might not appear until later and because subtle injuries might not show up in standard brain scans. The ideal option, some say, would be to use a biomarker: “We'd like to be able to do a blood test to determine the injury,” says Colonel Robert Labutta, a neurologist at the health affairs office at DOD. But until the science of blast injury is established, officials say, it does not make sense to bring home every soldier who has been in the vicinity of an explosion.

    The costs of treating TBI victims from Iraq and Afghanistan could be astronomical. At last count, nearly 25,000 soldiers had been diagnosed with TBI. One estimate of the financial burden, calculated by Harvard researchers, puts the figure at $14 billion over the next 20 years. But officials seem determined not to miss any cases among troops coming home: In April, VA mandated TBI screening for all Iraq and Afghanistan veterans who come to VA hospitals for any services, even if it's a dental exam.

    The spotlight on mild TBI has drawn the attention of older combat veterans who were exposed to blasts but were never treated for neurological symptoms. Many were diagnosed with PTSD; some of the symptoms—such as depression, irritability, and attention deficit—overlap with those of mild TBI. These cases, some reaching back to the Vietnam War, could have significant legal and financial implications, says Edward Kim, a psychiatrist with Bristol-Myers Squibb in Plainsboro, New Jersey, and author of a recent report from the American Neuropsychiatric Association on the mental health effects of TBI. “I question whether DOD and the VA really want to open this can of worms,” he says. For example, a veteran with Alzheimer's disease could make a claim pointing to research showing that TBI increases the risk of developing the condition.

    Cernak says she has been receiving e-mails and phone calls from veterans thanking her for her research and seeking more information. Last month, she got a call from a 47-year-old woman who had served in the first Gulf War. The woman had been a teacher before she went to the combat zone, where she was exposed to repeated blasts. After she returned home, she had to stop teaching because she could not remember any facts. The story reminded Cernak why she had begun studying this obscure field 2 decades ago. “Soldiers anywhere are one of the most vulnerable populations in the world,” she says. “It is a moral obligation to help them.”

  13. Project Tracks S&T in '08 Presidential Campaign

    1. Benjamin Somers,
    2. Becky Ham

    As the U.S. presidential primary campaigning builds to high intensity across the country this winter, AAAS has launched a new Web site devoted to science and technology issues in the 2008 election.

    Science and Technology in the 2008 Presidential Election highlights five S&T issues to watch in the election year: competitiveness and innovation; education and the workforce; health care; energy and the environment; and homeland security.

    The site also features the candidates' positions on the major S&T issues, including relevant news stories and published commentaries; survey information; white papers and other reports from policy organizations; and election calendars.

    Although some candidate Web sites feature extensive S&T platforms, these detailed plans have yet to become a key feature in any campaign. During the primary campaign, “the treatment of science-related issues has ranged from superficial to nonexistent,” AAAS CEO Alan I. Leshner wrote in a 4 September 2007 op-ed in the Des Moines Register.

    With a grant from the Richard Lounsbery Foundation, AAAS's Center for Science, Technology, and Congress developed the new site as a resource for voters to explore the candidates' S&T positions, and to provide the research community with a vehicle for informing the candidates on emerging issues. CSTC and its partner in the site, the Association of American Universities (AAU), plan to contact the various presidential campaigns to encourage them to suggest relevant content for the site.

    “As science and technology issues become a part of the political debate in 2008, it is important that voters have access to as much information as possible about the candidates and their views,” said CSTC Director Joanne Carney. “We hope the project will provide this information, give the candidates access to voters who care about these issues, and engender a much-needed dialogue about science, technology, policy, and politics.”

    The project is the result of an informal working group composed of representatives from the AAU, Woodrow Wilson International Center for Scholars, Center for the Study of the Presidency (CSP), National Academies, and other organizations. Meeting last year, the group urged the scientific community to get more involved in political dialogues and encourage the candidates to discuss S&T during their campaigns.

    “In previous elections, science issues have been reduced to one or two hot-button items, with no real discussion of how candidates would utilize science in their administration,” said AAU President Robert M. Berdahl. “But the 21st century will witness a worldwide competition for scientific and technological mastery, and it is our hope that the site will help voters determine how candidates would meet this challenge.”

    Carney said the new site was also motivated in part by a study released jointly by CSP and AAAS, prepared as a report to the U.S. president-elect in the fall of 2000. The report, Advancing Innovation: Improving the S&T Advisory Structure and Policy Process, included discussions from a conference by AAAS and CSP on science policy, presidential leadership, the evolution of the White House Office of Science and Technology Policy, and the importance of congressional support for basic science research. The report eventually became part of a six-volume CSP series delivered to the president-elect that highlighted pressing issues for the coming executive term.

    A special News Focus section in the 4 January issue of Science profiled the leading Republican and Democratic presidential candidates and their stances on science issues such as global climate change, stem cell research, and innovation.

  14. Call for Nomination of 2008 AAAS Fellows

    AAAS Fellows who are current members of the association are invited to nominate members for election as Fellows. A Fellow is defined as a member “whose efforts on behalf of the advancement of science or its applications are scientifically or socially distinguished.” A nomination must be sponsored by three AAAS Fellows, two of whom must have no affiliation with the nominee's institution.

    Nominations undergo review by the steering groups of the association's sections (the chair, chair-elect, retiring chair, secretary, and four members-at-large of each section). Each Steering Group reviews only those nominations designated for its section. Names of Fellow nominees who are approved by the steering groups are presented to the AAAS Council for election.

    Nominations with complete documentation must be received by 9 May 2008. Nominations received after that date will be held for the following year. The nomination form and a list of current AAAS Fellows are available on the AAAS Web site. To request a hard copy of the nomination form, please contact the AAAS Executive Office, 1200 New York Avenue NW, Washington, DC 20005, USA, at 202-326-6635, or at btucker{at}.

  15. Scientists Convene in Boston to Build World Partnerships

    “Science knows no country, because knowledge belongs to humanity, and is the torch which illuminates the world.” (Louis Pasteur)

    Researchers, journalists, and the public can explore the world of science and its influence on the world through interdisciplinary studies and emerging policy issues presented at the 175th AAAS Annual Meeting in Boston next month, “Science and Technology from a Global Perspective.”

    This year's program includes more than 150 symposia and special lectures on topics ranging from climate change and infectious disease to technology in developing nations and emerging international collaborations among researchers. Parents, teachers, health educators, and the public are invited 17 February to a special Town Hall on childhood nutrition and obesity trends around the world.

    In his invitation to the meeting, AAAS President David Baltimore said the conference will highlight the power of science, technology, and education “to assist less developed segments of world society while also improving cooperation among developed countries and spurring knowledge- driven transformation across scientific disciplines.”

    The AAAS Annual Meeting Blog will provide extensive coverage from Boston, featuring reports and podcasts from the staff of Science and ScienceNOW, AAAS's award-winning Science Update radio program, and AAAS's writers, along with links to U.S. and international news coverage of the meeting. For registration and other information about this year's meeting, see the AAAS Annual Meeting Web site.