# News this Week

Science  07 Jan 2005:
Vol. 307, Issue 5706, pp. 22
1. INDIAN OCEAN TSUNAMI

# In Wake of Disaster, Scientists Seek Out Clues to Prevention

1. Yudhijit Bhattacharjee*
1. With reporting by Pallava Bagla in New Delhi.

Having claimed more than 150,000 lives and destroyed billions of dollars' worth of property, nature last week reminded the world of the terrible cost of ignorance. Now the nations devastated by the massive earthquake and tsunami that ravaged the Bay of Bengal the morning after Christmas Day are hoping to marshal the political and scientific will to reduce the toll from the next natural disaster.

A week after the tragedy, the question of how many lives might have been saved had authorities in those countries recognized the danger in time to evacuate their coasts remains unanswered. But it's a hypothetical question, because the information needed to take such steps doesn't exist. That's why researchers are gearing up for an international data-collection effort in the affected countries, aimed at improving models of how tsunamis form and setting up a warning system in the Indian Ocean. “This was a momentous event both in human and scientific terms,” says Costas Synolakis, a civil engineer and tsunami researcher at the University of Southern California in Los Angeles. “It was a failure of the entire hazards-mitigation community.”

As relief efforts continue, scientists are traveling to the ravaged coasts to survey how far inland the water ran up at different points along the shorelines, how tall the waves were, and how fast they hit. In addition to providing a detailed picture of the event, says Philip Liu, a tsunami expert at Cornell University who is flying to Sri Lanka this week, information from these field surveys will enable researchers to test computer models that simulate the propagation of tsunami waves and the pattern of flooding when they break upon the shore. The geographical span of the disaster presents an opportunity to “run simulations on a scale that has not been possible with data from smaller tsunamis in the Pacific,” says Synolakis, who is joining Liu in Sri Lanka. Among other surveys being conducted in the region is one led by Hideo Matsutomi, a coastal engineer at Japan's Akita University, who is studying the disaster's effects on Thailand's shoreline.

Testing and refining tsunami models would increase their power to predict future events—not just in the Indian Ocean but elsewhere, too, says Vasily Titov, an applied mathematician and tsunami modeler at the Pacific Marine Environmental Laboratory in Seattle, Washington. Synolakis says the goal is to be able to predict, for any given coast with a given topography, which areas are most vulnerable and thus in greatest need of evacuation.

Such predictions would be easier to make if ocean basins resembled swimming pools and continents were rectangular-shaped slabs with perfect edges. But the uneven contours of sea floors and the jagged geometry of coastlines make tsunami modeling a complex engineering problem in the real world, Titov says. Exactly how a tsunami will travel through the ocean depends on factors including the intensity of the earthquake and the shape of the basin; how the waves will hit depends, among other factors, on the lay of the land at the shore.

What makes tsunami warnings even more complicated, Synolakis says, is that undersea quakes of magnitudes as great as 7.5 can often fail to generate tsunami waves taller than 5 centimeters. “What do you do without knowing precisely where and when the waves will strike and if they will be tall enough to be a threat?” he says. “Do you just scare tourists off the beach, and if nothing comes in, say, ‘Oh, sorry’?”

It wasn't concerns about issuing a false alarm, however, that prevented scientists in India, Sri Lanka, and the Maldives from alerting authorities to the tsunami threat. Instead, researchers say, the reason was near-total ignorance. At the National Geophysical Research Institute (NGRI) in the south Indian city of Hyderabad, for example, seismologists knew of the earthquake within minutes after it struck but didn't consider the possibility of a tsunami until it was too late. In fact, at about 8 a.m., an hour after the tsunami had already begun its assault on Indian territory by pummeling the islands of Andaman and Nicobar some 200 km northwest of the epicenter, institute officials were reassuring the media that the Sumatran event posed no threat to the Indian subcontinent.

About the same time, in neighboring Sri Lanka, scientists at the country's only seismic monitoring station, in Kandy, reached a similar conclusion. “We knew that a quake had occurred—but on the other side of the ocean,” says Sarath Weerawarnakula, director of Sri Lanka's Geological Survey and Mines Bureau, who hurried to his office that morning after feeling the tremors himself. “It wasn't supposed to affect us.”

Walls of water crashing onto the Indian and Sri Lankan coasts soon proved how wrong the scientists were. The waves flung cars and trucks around like toys in a bathtub and rammed fishing boats into people's living rooms. “We'd never experienced anything like this before,” says NGRI seismologist Rajender Chadha. “It took us completely by surprise, and it was a terrible feeling.”

The international scientific community fared somewhat better at reacting to the quake, but not enough to make a difference. An hour after the quake, the Pacific Tsunami Warning Center (PTWC) in Ewa Beach, Hawaii—which serves a network of 26 countries in the Pacific basin, including Indonesia and Thailand—issued a bulletin identifying the possibility of a tsunami near the epicenter. But in the absence of real-time data from the Indian Ocean, which lacks the deep-sea pressure sensors and tide gauges that can spot tsunami waves at sea, PTWC officials “could not confirm that a tsunami had been generated,” says Laura Kong, director of the International Tsunami Information Center in Honolulu, which works with PTWC to help countries in the Pacific deal with tsunami threats.

However, some researchers say that the seismic information alone—including magnitude, location, and estimated length of the fault line—should have set alarm bells ringing. Although not all undersea quakes produce life-threatening tsunamis, the Sumatran quake—later pegged at magnitude 9.0—was “so high on the scale, you had to know that a large tsunami would follow,” says Emile Okal, a seismologist at Northwestern University in Evanston, Illinois. What may have made it difficult for officials to reach that conclusion, says Okal, was the rarity of tsunamis in the Indian Ocean: Fewer than half a dozen big ones have been recorded in the past 250 years.

But even if there had been reasonable certainty that a tsunami was building up stealthily under the waters, scientists say they are not sure what they could have done. As the morning wore on, for example, geophysicists in India realized that “a tsunami would be generated, but how it would travel and when it would strike—we simply had no clue,” says Chadha.

9. SYSTEMATICS

# Philadelphia Institution Forced to Cut Curators

1. Jocelyn Kaiser

A chronic budget shortfall has forced the oldest natural history institution in the United States to lay off 5% of its staff. Outside scientists are especially concerned that the Academy of Natural Sciences in Philadelphia is losing three of its 10 curators, including the overseer of a prized, nearly 200-year-old ornithology collection. The move is part of a trend of cutbacks at natural history museums. “We're losing positions. It's of national concern,” says Smithsonian Institution ornithologist Helen F. James.

The academy, founded in 1812, runs a museum and research programs and houses 17 million biological specimens. Its $12 million annual budget has faced deficits of $500,000 to $1 million for a decade, explains president and CEO D. James Baker, former head of the National Oceanic and Atmospheric Administration. As a result, Baker says leaders made the “painful decision” last month to lay off 13 of 250 employees across all divisions. The layoffs go into effect over the next 6 months.

Thomas Lovejoy, head of the Heinz Center, an environmental think tank in Washington, D.C., and an academy board member, says that the cuts were inevitable. “They just had to address” the deficit, he notes.

The three curators losing their jobs are Leo Joseph, assistant curator and chair of ornithology; Richard McCourt, an associate botany curator; and Dominique Didier-Dagit, an associate curator of ichthyology. Some outside scientists who asked not to be identified suggest that these junior scientists weren't pulling in enough grant money. Baker doesn't deny the charge, saying that the academy tried to keep staff in “areas where we think there is research support from outside agencies.” (Joseph and McCourt referred calls to an academy spokesperson.)

The academy's ornithology collection, which now has no curator, is a paramount concern. The holdings include many of the earliest specimens collected by North American ornithologists as well as the Australia collection of British ornithologist John Gould. Baker says the academy “has made an absolute commitment to preserve” this resource, which will still have a manager to make it available to scientists. But experts worry that the absence of a curator to add specimens and conduct his or her own research could undermine it. “A collection should be part of a living and breathing community,” says A. Townsend Peterson, ornithology curator of the Natural History Museum at the University of Kansas, Lawrence.
Baker is mum on future staffing plans, saying only that “we can grow our number of curators” if the budget outlook improves. But he predicts that a focus on certain areas, such as watershed management and molecular systematics, will create “a stronger institution.”

10. UNIVERSITY ASSESSMENT

# Funding Woes Delay Survey of U.S. Graduate Programs

1. Jeffrey Mervis

The National Research Council (NRC) is having trouble raising enough money for an assessment of U.S. doctoral programs.

Everybody agrees that a survey of the quality of U.S. graduate education is important. But the consensus dissolves when it comes to paying for it. The National Academies' NRC is trying to raise $5.2 million for what it hopes will be a bigger and better version of two previous assessments, which appeared in 1982 and 1995, of the relative quality of research doctoral programs. Two foundations—Alfred P. Sloan and Andrew W. Mellon—have agreed to kick in $1.2 million, roughly the cost of the 1995 survey. But NRC's attempt to collect the rest from the federal government has so far come up empty. “We've talked to many agencies, but we haven't generated any interest,” laments one NRC official.

As a result, last month NRC officially postponed by 1 year the scheduled 1 July 2005 start of the assessment, a multistage exercise that includes a compilation of institution and program demographics, an analysis of each faculty member's publishing record, and a polling of graduate students. (An earlier schedule had the survey beginning last summer.) The decision, which study director Charlotte Kuh blames on “a delay in funding,” means an expected publication date of 2008 rather than the original target of 2006. That's a blow to what Princeton University astrophysicist Jeremiah Ostriker calls “the premier way to measure graduate education.” Ostriker chaired an NRC panel whose recommendations on methodology and scope have been incorporated into the new survey (Science, 12 December 2003, p. 1883).
The delay cedes ground to commercial rankings, notably by U.S. News and World Report. It also complicates life for U.S. institutions with aspiring programs that look to the NRC survey to validate their progress at a time when graduate schools are facing growing competition from other nations for the world's best students. The holdup is a big disappointment to J. Bruce Rafert, dean of the graduate school at Clemson University in South Carolina, who persuaded his bosses to pony up additional resources to gather data from faculty, students, and staff to pass along to NRC. “I had coordinated data collection with the IT people and held a number of workshops for faculty and staff,” says Rafert. “We were fairly far into this when I heard [about the delay].”

Some administrators aren't taking the news lying down. In a meeting last month of graduate deans, Lawrence Martin of Stony Brook University in New York proposed that universities pay an annual subscription fee to raise the necessary funds. “Of course the government has a stake,” says Martin. “But if the feds don't want to pay, then we have to do it another way. For me, it's not an option not to do it.” A modest annual fee, Martin noted, would also allow NRC to update the survey more frequently than the current rate of once every 13 years.

The proposal makes a lot of sense to many deans. “It's the best suggestion that I heard at the meeting,” says Rafert. But other administrators are cool, if not downright hostile, to financing the survey that way. Universities would already be paying indirectly for the assessment with a sizable investment of staff time and resources, argues John Vaughn of the Association of American Universities in Washington, D.C., a coalition of 62 major research institutions in the United States and Canada. He also thinks the assessment will generate data that can help the federal government gauge the quality of the scientists whom it is supporting.
“I think [a subscription] would be a real mistake because graduate training is a society-wide issue,” says Vaughn. “It's also a slippery slope; if universities pick up the tab for this, then the government may start looking to duck other obligations, too.” Debra Stewart, president of the Washington, D.C.-based Council of Graduate Schools, also fears that the survey's credibility could be tainted if its primary audience also pays the freight.

Academy officials hope to meet this month with presidential science adviser John Marburger to make the case for the government's involvement. (Neither of the previous NRC surveys received federal funding, although the National Institutes of Health, the National Science Foundation, and the U.S. Department of Agriculture helped finance the methodology review that Ostriker chaired.) But they may need stronger arguments than those they've used to date. “The NRC survey is well-designed and likely to be an improvement on all previous assessments,” Marburger said in an e-mail to Science. “But it is more directly relevant and useful to the surveyed institutions than to the funding agencies.”

One government official who has heard NRC's pitch found it lacking. “We thought that we could use the technical portion of the assessment to help us evaluate our own training programs,” says the official, who requested anonymity. “But that idea doesn't really hold up. We already get a lot of information from our grantees.” At the same time, the official added, some issues of interest to an agency may be too specialized to show up in the NRC survey.

Although Vaughn sees NRC's suspension of the survey as a necessary evil, Martin worries that it could be the beginning of the end. “After telling people get ready, get ready for the NRC survey, now I'm sick of talking about it,” says Martin.
“It's off the table, as far as I'm concerned.” The uncertainty has also led him to explore other ways to assess the quality of graduate education, such as mining existing databases that measure the quantity and quality of scholarly publications. “It'll provide only a subset of the whole picture,” Martin admits. “But it's something we can do on our own, inexpensively, and repeat as needed.” That's more than the NRC can offer, at least right now.

11. GENETICS

# A Genomic View of Animal Behavior

1. Elizabeth Pennisi

By integrating studies in genomics, neuroscience, and evolution, researchers are beginning to reveal some of the mysteries of animal behavior.

Why a dog—or a human for that matter—cuddles up with one individual but growls at another is one of life's great mysteries, one of the myriad quirks of behavior that has fascinated and frustrated scientists for centuries. Here's another: Are we hard-wired to tend our young or culturally indoctrinated to have family values?

It's no surprise that such mysteries remain unsolved. They are rooted in complex interactions between multiple genes and the environment, and the tools to tackle them have largely been unavailable until recently. But behavioral researchers are beginning to apply techniques that are transforming other areas of biology. They are using microarrays—which can track hundreds or thousands of genes at once—to learn, for example, why some honey bees are hive workers and others are foragers, and what makes some male fish wimps and others machos. They are also comparing the sequenced genomes of the growing menagerie of animals, probing whether genes known to influence behavior in one species play similar roles in others. Investigators have even gone so far as to swap gene-regulating DNA sequences between species with different lifestyles; in one case, they transformed normally promiscuous rodents into faithful partners.
While these comparative approaches are de rigueur for evolutionary biologists, they are something new for many neuroscientists and others who typically study behavior in a single model organism, says Gene Robinson, an entomologist at the University of Illinois, Urbana-Champaign, who is trying to encourage more crosstalk between disciplines. “There is this clear gulf between people who are using modern genetic techniques to study very specific questions and the people who are studying natural diversity,” adds Steve Phelps from the University of Florida, Gainesville. But as more behavioral scientists take up the tools of genomics and comparative biology, the payoff may be a deeper understanding of the molecular basis of behavior in animals—even people—and how behaviors originally evolved. The field “is very ripe for a productive synthesis,” says Phelps.

## Foraging for genes

As gene sequencers turn their attention to deciphering the genomes of dozens of evolutionarily diverse species, a deluge of genome data is beginning to transform some aspects of behavioral science. Instead of just probing the minutiae of how a gene works in one organism, scientists are increasingly investigating how a particular gene operates in multiple species.

Take the story of a wanderlust gene studied by Marla Sokolowski of the University of Toronto, Ontario, Canada. Almost 25 years ago, Sokolowski and her colleagues discovered that a then unidentified gene, which they dubbed forager (for), controlled how much a fruit fly wandered. One variant of the gene makes a fly a more active forager—a “rover”—while another variant causes a fly to be less active, a “sitter.” In 1997, her team finally cloned this gene, which codes for a protein called cGMP-dependent protein kinase (PKG), an important cell-signaling molecule (Science, 8 August 1997, pp. 763, 834). The rover variant turned out to generate higher quantities of the signaling protein.
This gene has recently proved key to feeding behavior in other invertebrates as well. In 2002, working with Sokolowski and her colleagues, Robinson and Yehuda Ben-Shahar, also from the University of Illinois, found that changes in the activity of for in honey bee brains prompted hive-bound workers to begin to change roles and start actively foraging for food. That same year, other researchers demonstrated that this gene influenced how likely nematodes were to explore their environment.

In the May-June 2004 issue of Learning and Memory, Sokolowski and her colleagues demonstrated that the PKG gene affects another behavior—how readily fruit flies respond to sugar. Rover flies are quick to extend their proboscis when exposed to sugar and continue to be stimulated by repeated exposure to sugar, while sitters gradually become used to the sweet stuff and ignore it, they reported. “It suggests that rovers may keep on searching for food because they don't [become indifferent to sugar],” says Sokolowski. This constant movement may be an evolutionary advantage for rovers in places where fruits and other foods are scattered.

Given the apparent importance of for in the behavior of fruit flies and other species, Sokolowski and Mark Fitzpatrick from the University of Toronto have now looked across the animal kingdom for the gene and others related to it. They searched public gene databases, and earlier this year, in the February Journal of Integrative and Comparative Biology, they reported finding 32 PKG genes from 19 species, including green algae, hydra, pufferfish, and humans. The strong sequence conservation of the genes between many species hints that they may play a role in food-related behavior in many organisms. “By studying [for] in additional species, we will find out how it modulates foraging behavior in different evolutionary scenarios,” says Sokolowski.
## The buzz about microarrays

Comparative genomics is helping researchers pinpoint specific genes involved in some behaviors, but scientists are also using microarrays to cast a broader net. For example, Robinson, behavioral geneticist Charles Whitfield, and their colleagues at the University of Illinois are using these gene expression monitors to study honey bee behavior. They first used microarrays to look at the differences, beyond the PKG gene, that distinguish bees that tended the hives from bees that left the hive for pollen (Science, 10 October 2003, p. 296). Of the 5500 genes examined, they found 2200 whose brain activity varied between the two types of bees.

Now they have begun to tease out the role of the hive environment in stimulating “nurse” or “forager” genetic regimes—finding genes that help regulate the PKG gene's activity. They raised newly emerged bees with no exposure to other bees, then used microarrays to test how certain chemicals known to change bee behavior alter the isolated insects' genetic activity. Last year, Christina Grozinger, now at North Carolina State University in Raleigh, showed that a hormone produced by the queen bee shifted gene expression toward the nurse profile, possibly by suppressing the for gene. Ben-Shahar conducted a similar experiment using a hormone that promotes foraging behavior. About half of the genes in the isolated bees shifted in a forager-like direction—and those typically active in hive worker bees turned off. “We had no genes going in the wrong direction,” says Whitfield.

Now he and his colleagues are looking at gene expression patterns in bees that either build combs or remove dead bees from a hive. The effort may provide a handle on which genes might promote these construction and undertaker behaviors.

Neurobiologist Hans Hofmann of Harvard University uses microarray technology to probe the behavior of fish.
He's investigating the genetic basis for the presence of studs and social outcasts among male cichlids. Some macho males sport bright colors, bully their peers, and court females. Others, the wimps, have small gonads and spend most of their time feeding or swimming in schools with other wimps. In certain circumstances, however, wimps become studs and vice versa, switches that seem to be driven by changing environments.

In the traditional approach, Hofmann would have tried to track individual genes involved in these transformations. Instead, he turned to microarrays and, in less than a year, has identified 100 genes that likely shape the male's social status. Some are genes that Hofmann had expected to be involved, but others, such as a number for ion channels, were surprises. He and his colleagues are now looking more closely at cichlid brains for differences in expression patterns between genes identified in the array studies. “Some of these genes that we decided to follow up, we would not have looked at without this approach,” Hofmann notes.

For both Robinson and Hofmann, microarrays have changed the way they investigate genes and behavior. In the pre-genomics era, both chased after candidate genes—those they had reason to suspect were important. But that tunnel vision “doesn't give you a perspective of how many other [genes] are involved,” Whitfield explains.

## Pathways to behavior

The genetic bounty provided by microarrays poses its own challenges, however. The devices can turn up many genes involved in even a simple behavior, and the molecules those genes encode need to be tied together into a logical pathway. Piecing together that jigsaw puzzle is no easy task. Elena Choleris from the University of Guelph has taken on that challenge and has worked out the relatively simple pathway underlying one behavioral response in a rodent.
She, Martin Kavaliers at the University of Western Ontario, London, Canada, and Don Pfaff from Rockefeller University in New York have shown the genetic interactions necessary for one mouse to recognize another and to react in a friendly or unfriendly manner. Researchers have known for several years that at least four proteins are involved in this process of social recognition: two estrogen receptors, located in different parts of the brain, and a neuropeptide, oxytocin, and its receptor.

Choleris looked at the interplay of these molecules by breeding mutant mice lacking each component. In different groups of mice, she and her team disabled one of the genes encoding the receptors or oxytocin. No matter the genetic defect, the outcome was the same: The mutant mice couldn't tell a familiar mouse from a stranger and were no longer worried about newcomers.

From additional experiments, Choleris has deduced some of the protein connections in what she calls a micronetwork, or micronet: One of the estrogen receptors controls oxytocin production in the hypothalamus, while the other receptor works in the amygdala to control the production of oxytocin's receptor. If any component of this micronet is interrupted, the whole pathway breaks down. The micronet exemplifies “how multiple genes act in parallel in an orchestrated manner between different systems and different brain areas,” says Choleris. In the wild, a breakdown of this particular micronet and the resulting social recognition deficits could have powerful implications. Choleris and colleagues have recently found that her mutant mice have a diminished ability to sense and stay away from nearby mice carrying parasites, for example.

## Beyond the gene

Microarrays are powerful tools for spotting genes that underlie different behaviors, but the way those genes are regulated may be just as important as the proteins they produce. Take the case of the prairie vole and the meadow vole.
The prairie vole (Microtus ochrogaster) is faithful to its mate; meadow voles (Microtus pennsylvanicus) are not. Yet the DNA sequence for vasopressin, the neuropeptide governing this trait, is the same in both species, as is the sequence of the gene for the hormone's receptor protein. There are, however, significant species differences in the number of brain receptors for vasopressin: Prairie voles have a lot more.

In 1999, Larry Young, a neuroscientist at Emory University in Atlanta, Georgia, noticed that a regulatory region, a DNA sequence that sits at the beginning of the receptor gene, was longer in the monogamous species. When he put the prairie vole's vasopressin receptor gene and its regulatory region into mouse embryos, the resulting adult rodents were more faithful than is typical for that particular mouse species. The same has now proved true for meadow voles, he and his colleagues reported in the 17 June Nature. When he put the full prairie vole gene, including the regulatory region, for this receptor into meadow voles, males abandoned their promiscuous ways and began acting like faithful prairie voles.

Michael Meaney from McGill University in Montreal, Quebec, has found that a different regulatory region, called a promoter, is pivotal in another social relationship, the one between parents and their offspring. In the early 1990s, he and others had demonstrated that when a mother rat fails to lick and groom her newborn pups, those pups grow up timid and abnormally sensitive to stress. The key seems to be methylation, a process in which DNA sequences are chemically modified by the addition of methyl groups to cytosine bases. This often suppresses the activity of a gene. Meaney's team discovered that in mice, a mother's behavior alters the typical methylation of the promoter for the gene for the glucocorticoid receptor in her offspring. In the brain, this receptor protein helps set off the cascade of gene expression that underlies the stress response.
Before birth, there's no methylation of this gene promoter. But in mice neglected by their mothers, the promoter is methylated shortly after birth, Meaney and his colleagues reported in the 27 June online Nature Neuroscience. This increased methylation causes less of the receptor to be produced, creating anxious animals. And because DNA methylation tends to last the life of the animal, it could explain why the pups' personalities don't change as they mature, Meaney notes.

While most behavioral genetics researchers have concentrated on non-human species, some are now slowly venturing into the murky waters of human behavior. Meaney's team, for example, is following 200 mothers and their children, looking at the interplay between maternal care and activity in key genes in the offspring. “The extent to which researchers are finding similar patterns” between animals and people is quite promising, notes Stephen Suomi, a psychologist at the National Institute of Child Health and Human Development, Laboratory of Comparative Ethology, Bethesda, Maryland.

These patterns are prompting new research alliances. Genes can represent a common ground, increasing “the links between individuals interested in [neural] mechanisms and the people who are interested in behavior,” explains Andrew Bass, a neuroethologist at Cornell University in Ithaca, New York. With this common ground will come a greater understanding of the brain as it relates to behavior, says Pfaff. And that, he adds, “is exciting to the nth degree.”

12. INFECTIOUS DISEASES

# Source of New Hope Against Malaria Is in Short Supply

1. Martin Enserink

New drugs based on an old Chinese cure could save countless lives in Africa, if health agencies and companies can find ways to make enough.

It seemed like a classic case of bait and switch. In 2004, the World Health Organization (WHO) and the Global Fund for AIDS, Tuberculosis, and Malaria threw their weight behind a radical change in the fight against malaria in Africa.
Old, ineffective drugs were to be abandoned in favor of new formulations based on a compound called artemisinin that could finally reduce the staggering death toll. More than 20 African countries have signed on. But the catch is that there aren't nearly enough of the new drugs to go around. Just before Christmas, WHO—which buys the tablets from Novartis for use in African countries—announced that it would deliver only half of the 60 million doses anticipated in 2005, leaving many countries in the cold. “It's a very cruel irony,” concedes Allan Schapira of WHO's Roll Back Malaria effort.

Other companies producing the drugs have the same problem as Novartis. Artemisinin is derived from plants grown primarily on Chinese and Vietnamese farms, and they have not kept up with demand. Several plans are afoot to create a new, more stable, and cheaper source. Last month, for instance, the Bill and Melinda Gates Foundation announced a $40 million investment in a strategy to make bacteria churn out a precursor to artemisinin. But such alternatives will take at least 5 years to develop, so the shortages are likely to persist, warns Jean-Marie Kindermans of Médecins Sans Frontières in Brussels.

New malaria drugs are badly needed. The parasite Plasmodium falciparum has developed resistance to the mainstays, such as chloroquine and sulfadoxine-pyrimethamine. The death toll—more than a million annually—is not declining, despite Roll Back Malaria, an ambitious international campaign launched in 1998 to halve mortality by 2010.

Enter Artemisia annua (also known as sweet wormwood or Qinghao), a shrub used for centuries in traditional Chinese medicine. In the 1970s, Chinese researchers discovered that its active ingredient, artemisinin, kills malaria parasites; since then, several chemical derivatives with slightly better properties have been developed. Known by names such as artemether or artesunate, they cure more than 90% of patients within several days, with few side effects observed so far. Best of all, no resistance has been seen yet. To keep it that way, WHO and others recommend that artemisinin compounds always be used with a second drug in a so-called Artemisinin-based Combination Therapy, or ACT.

Although ACTs are widely used in Asia, their introduction in Africa has lagged. Countries have been reluctant to make the switch because, at about $2.40 per treatment course, ACTs are 10–20 times more expensive than existing drugs. The Global Fund has also dragged its feet, some allege, by funding the purchase of older, cheaper drugs for too long.

Things began to change when an expert group published a scathing letter in The Lancet in January 2004, accusing the Global Fund and WHO of “medical malpractice.” Both organizations denied the claims, explaining that they supported ACTs but that change took time. Both also concede that the ensuing debate spurred them to redouble their efforts.

But companies are reluctant to produce the drugs, as are farmers to grow Artemisia, without guarantees that they'll sell—and that's the problem. The Global Fund does not have nearly enough money to fund the drugs' introduction across Africa. Donor countries like the U.S. and the U.K. appear reluctant to spend aid money on market guarantees for big pharma, says Schapira, because it could be seen as lining shareholders' pockets; at an emergency session at WHO just before Christmas, no donors made any commitments.

WHO's hope is that growing demand will eventually create a stable artemisinin supply at low prices. Artemisia farms are now springing up in India, and WHO is supporting experiments to grow the plants in east Africa.

The Gates Foundation is banking on a less fickle supply route. Over the past 10 years, chemical engineer Jay Keasling and colleagues at the University of California, Berkeley, have spliced nine genes into Escherichia coli bacteria to make them produce terpenoids, a class of molecules that includes artemisinin. With a few genes borrowed from Artemisia, they should be able to produce an artemisinin precursor, Keasling says.
On 13 December, the foundation announced a $42.6 million grant to the Institute for OneWorld Health in San Francisco—which bills itself as the world's first non-profit pharmaceutical company—to help Keasling finish the engineering. Then a biotech startup will optimize the process for producing artemisinin—“tons and tons of it,” says OneWorld Health president Victoria Hale—about 5 years from now. Her assumption is that pharmaceutical companies will package OneWorld's artemisinin derivatives into ACT tablets and sell them at well under a dollar per treatment.

There's another alternative. Jonathan Vennerstrom and colleagues at the University of Nebraska, Omaha have synthesized a compound called OZ277 (or simply OZ) that, like artemisinin, has a peroxide bridge shielded by large chemical rings. The compound has been tested as an antimalarial in vitro and in animals, and it looks even better than the real thing, Vennerstrom and colleagues reported in Nature in August. Ranbaxy, an Indian pharmaceutical company, is developing it further; a phase 1 safety trial has just been completed.

Ideally, 4 or 5 years from now, OZ will result in new drug combinations that have the power of current ACTs but cost less than a dollar per treatment, says Chris Hentschel, chief executive of the Medicines for Malaria Venture (MMV), a non-profit based in Geneva that supports its development. Still, Hentschel is trying to temper his optimism. Drugs can always fail during testing, and even ACTs may eventually lose their efficacy, like almost every malaria drug before them. That's why, despite the new hope, MMV has its pipeline well-stocked with unrelated candidates.

13. ARCHAEOLOGY

# Oldest Civilization in the Americas Revealed

1. Charles C. Mann

Almost 5000 years ago, ancient Peruvians built monumental temples and pyramids in dry valleys near the coast, showing that urban society in the Americas is as old as the most ancient civilizations of the Old World

BARRANCA, PERU—A few miles northeast of this small fishing town, the Pan-American Highway cuts through a set of low, nondescript hummocks in the narrow Pativilca River valley. If they were so inclined, the truckers thundering along the road could spot on the hillocks the telltale signs of archaeological activity—vertical-sided cuts into the earth surrounded by graduate students with trowels, brushes, tweezers, plastic bags, and digital cameras.

The Pativilca, about 130 miles north of Lima, is one of four adjacent river valleys in the central Peruvian seacoast known collectively as the Norte Chico, or Little North (see map below). Pinched between rain shadows caused by the high Andes and the frigid Humboldt Current offshore, this is one of the driest places on earth; rainfall averages 5 cm a year or less. Because of the exceptional aridity, ancient remains are preserved with startling perfection. Yet the same aridity long caused archaeologists to ignore the Norte Chico, because the region lacks the potential for the full-scale agriculture thought to be necessary for the development of complex societies.

Then in the 1990s, groundbreaking research directed by archaeologist Ruth Shady Solis of the Universidad Nacional Mayor de San Marcos established that such societies had existed in the Norte Chico in the third millennium B.C.E., the same time that the Pharaohs were building their pyramids (Science, 27 April 2001, p. 723). And in the 23 December issue of Nature—in what archaeologist Daniel H. Sandweiss of the University of Maine at Orono describes as “truly significant” work—archaeologists Jonathan Haas of the Field Museum in Chicago and Winifred Creamer and graduate student Alvaro Ruiz of Northern Illinois University in DeKalb reported the startling scope of the Norte Chico ruins, which include “more than 20 separate residential centers with monumental architecture,” and are one of the world's biggest early urban complexes. The ruins are dominated by large, pyramid-like structures, presumably temples, which faced sunken, semicircular plazas—an architectural pattern common in later Andean societies. The new work includes 95 radiocarbon dates that confirm the great antiquity of this culture, which emerged about 2900 B.C.E. and survived until about 1800 B.C.E.

The concentration of cities in the Norte Chico is so early and so extensive, the archaeologists believe, that coastal Peru must be added to the short list of humankind's cradles of civilization, which includes Mesopotamia, Egypt, China, and India. Yet the Peruvian coast, as Shady has argued, is in some ways strikingly unlike the others. She points out that most of the Eurasian centers “interchanged goods and adaptive experiences,” whereas the Norte Chico “not only developed in isolation from those [societies], but also from Mesoamerica, the other center of civilization in the Americas, which developed at least 1500 years later.” The result, according to Haas, is that the Norte Chico provides a laboratory in which to observe “that most puzzling phenomenon, the invention of the state.” The people of this ancient, isolated society, says Haas, “had no models, no influences, nobody to copy. The state evolved here purely for intrinsic reasons.”

## Cities without farms

Although the Norte Chico mounds were flagged as possible ruins as far back as 1905, researchers never excavated them because, according to Ruiz, “they didn't have any valuable gold or ceramic objects, which is what people used to look for.” The first full-scale excavation took place in 1941, when Gordon Willey and John M. Corbett of Harvard discovered a single multiroomed building at Aspero, a salt marsh at the mouth of the Supe River. Puzzled by what seemed to be an isolated structure, the team took 13 years to publish their data.

Willey and Corbett also noted a half-dozen odd “knolls, or hillocks,” which the two men described as “natural eminences of sand.” Thirty years later, in the 1970s, Willey returned to Aspero with archaeologist Michael E. Moseley, now at the University of Florida at Gainesville. They quickly established that the site actually covered 15 ha and that the natural knolls were, in truth, “temple-type platform mounds.” It was “an excellent, if embarrassing, example,” Willey later wrote, “of not being able to find what you are not looking for.” When carbon dating revealed that the site was very old, Moseley says, “it became obvious that Aspero was something big and important.”

It was also a conundrum. All complex Eurasian societies developed in association with large river valleys, which offered the abundant fertile land necessary for agriculture. And social scientists have long believed that the organization of labor necessary for agriculture was the wellspring of civilization. Aspero, on a little river that coursed through a desert, had almost no farmland. “We asked, ‘How could it sustain itself?’” Moseley says. “They weren't growing anything there, or almost anything.”

The question prompted Moseley in 1975 to draw together earlier work by Peruvian and other researchers into what has been called the MFAC hypothesis: the maritime foundations of Andean civilization. He proposed that there was little agriculture around Aspero because it was a center of fishing, and that the later, highland Peruvian cultures, including the mighty Inca, all had their origins not in the mountains but in the great fishery of the Humboldt Current, still one of the world's largest. Bone analyses show that late-Pleistocene coastal foragers “got 90% of their protein from the sea—anchovies, sardines, shellfish, and so on,” says archaeologist Susan deFrance of the University of Florida, Gainesville (Science, 18 September 1998, pp. 1830, 1833). “Later sites like Aspero are just full of [marine] fish bones and show almost no evidence of food crops.” The MFAC hypothesis, she says, boils down to the belief “that these huge numbers of anchovy bones are telling you something.”

Despite its explanatory power, the hypothesis had to be modified when Shady began work at Caral, almost 23 kilometers upriver from Aspero. One of 18 sites with monumental and domestic architecture found by Shady's team, Caral covered 60 ha and was, in Shady's view, a true city—a central location that provided goods and services for the surrounding area and was socially differentiated, with lower-class barrios in the periphery and elite residences with painted masonry walls in the center. Dating to about 2800 B.C.E., Shady says, Caral was dominated by a pyramid bigger than a football field at the base and more than seven stories high, overlooking a plaza bordered by smaller monumental structures. The big buildings suggested a large resident population, but again there were plenty of anchovy bones and little evidence of subsistence agriculture. The agricultural remains were mainly of cotton, used for fishnets, and the tropical tree fruits guayaba and pacae. All were the products of irrigation. In the Norte Chico, the Andes foothills jut close to the coast, creating the sort of swiftly dropping rivers that are easiest to divert into fields.

To Moseley, the abundance of fish bones at Caral suggested that the ample protein on the coast allowed people to go inland and build irrigation networks to produce the cotton needed to expand fishing production. Caral thus lived in a symbiotic relationship with Aspero, exchanging food for cotton.

## The making of a state

The central structures in Norte Chico cities were constructed in what Haas believes to have been sudden bursts of as few as two or three generations. The buildings were made largely by stacking, like so many bricks, mesh bags filled with stones. So perfect is the preservation that the Peruvian-American team can remove 4,000-year-old “bricks” from the pyramids almost intact, the cane mesh around them still in place. (Along with food remains, the mesh provided many of the samples used for carbon dating.) But the impressive size of the monuments is not matched by a rich material culture; the Norte Chico society existed before ceramics.

According to Creamer, Haas, and Ruiz, the sheer scale of the inland sites raises a major challenge to the MFAC hypothesis. “The great bulk of the population lived inland in these cities,” Creamer says. “If there were 20 cities inland and one on the coast, and many of the inland cities are bigger than the coastal city, the center of the society was inland.”

But defenders of the MFAC hypothesis remain convinced that the coastal areas were of primary import. “What may be important,” says deFrance, is not the scope of the society “but where it emerged from and the food supply. You can't eat cotton.”

Whether maritime or inland cities developed first, it seems clear that each depended on the other, and Haas says that this interdependency has major implications. “If I look beyond Aspero at this time, what I see is a bunch of fishing sites all up and down the Peruvian coast. All of them have cotton, but they are on the coast where they can't really grow it. And then you have one big gorilla inland—a concentration of inland sites that are eating anchovies but can't obtain them themselves. It's a big puzzle until you put them together. … I believe we are getting the first glimpses of what may be the growth of one of the world's first large states, or something like it.”

In archaeological theory, societies are often depicted as moving from the kin-based hierarchy of the band to the more abstract authority of the state in order to organize the defense of some scarce resource. In the Norte Chico, the scarce resource was presumably arable land. Haas, Creamer, and Ruiz think that the land was more valuable than generally believed, and that agriculture was more important than allowed for in the MFAC hypothesis. Luis Huaman of the Universidad Peruana Cayetano Heredia in Lima is examining pollen from the Norte Chico sites and promises that “we will see when agriculture came in and what species were grown there.” Regardless of the results, though, the cities of the Norte Chico were not sited strategically and did not have defensive walls; no evidence of warfare, such as burned buildings or mutilated corpses, has been found. Instead, Haas, Creamer, and Ruiz suggest, the basis of the rulers' power was the use of ideology and ceremonialism.

“You have lots of feasting and drinking at these sites,” Haas says. “I have the beginning of evidence that there are the remains of feasts directly incorporated into the monuments, the food remains themselves and the hearths from cooking all built into it.” Building and maintaining the pyramids—so unlike anything else for thousands of miles—was the focus of communal spiritual exaltation, he suggests. A possible focus for the religion is the curious figure Creamer found incised on a gourd. Dated to 2250 B.C.E., it resembles in many ways later Peruvian deities, including the Inca god Wiraqocha, suggesting that the Norte Chico may have founded a religious tradition that existed for almost 4000 years.

Despite their excitement about the new work, MFAC backers see no reason yet to give up the belief that, as Sandweiss puts it, “the incredibly rich ocean off this incredibly impoverished coast was the critical factor.” Only the upper third of Aspero has been excavated, notes deFrance, and its emergence has never been properly dated. If coastal Aspero, though much smaller than the inland cities, is also much older, the MFAC hypothesis might be supported. With Moseley, Shady's team is hoping to resolve the debate by digging to the bottom of Aspero next summer. Meanwhile, Haas, Creamer, and Ruiz have bought a house in Barranca for a laboratory and barracks. They plan to work in the area for years to come. “You're going to be hearing a lot more about the Norte Chico,” Ruiz promises.

14. ETHICS

# Is Tobacco Research Turning Over a New Leaf?

1. David Grimm

Scientists developing reduced-harm tobacco products increasingly rely on tobacco industry funding, but some universities and grant organizations want to forbid it

A 65-year-old man sitting at a small table in a lab at Duke University Medical Center in Durham, North Carolina, asks for a cigarette, his twelfth in less than eight hours. A researcher is happy to oblige. As the man lights up, a swarm of technicians buzzes around him, drawing blood from a catheter in his arm, making him exhale into a sensor, and administering cognitive tests.

The experiment, led by neuroscientist Jed Rose, focuses on the volunteer's response to a cigarette called Quest, made from tobacco that's been genetically engineered to contain less nicotine. Rose directs the university's Center for Nicotine and Smoking Cessation Research, dedicated to helping smokers kick the habit. He sees the Quest study as an important step in the center's mission because it indicates that smokers of this new product inhale less deeply than smokers of an earlier “reduced-harm” product—the low-tar cigarette—and may therefore be able to decrease their dependence on tobacco. But the work is controversial. Quest's maker, the Vector Tobacco Company of Research Triangle Park, North Carolina, paid for the study, and tobacco giant Philip Morris funds the center.

Since the late 1990s, the tobacco industry has provided university researchers with millions of dollars to help develop a new class of reduced-harm products—including modified cigarettes like Quest, tobacco lozenges, and nicotine inhalation devices—ostensibly to reduce the hazards of smoking. Advocates say the industry has turned over a new leaf and is now serious about improving the safety of its products. But critics, who cite the industry's efforts to manipulate science over the past 50 years, see nothing but the same old smoke and mirrors.

Anti-smoking activists tried to stop tobacco's research juggernaut more than a decade ago—and won some battles. But industry funding is flourishing, igniting a debate on some campuses over whether universities should ban tobacco money and whether grant organizations should deny funding to individuals or schools that take this money—as Britain's Wellcome Trust already does and the American Cancer Society is about to do.

It's not a simple issue, says Ken Warner, a public health expert at the University of Michigan, Ann Arbor, and president of the Society for Research on Nicotine and Tobacco. He concedes that the tobacco industry was guilty of misconduct in the past but worries about restricting research. “How do you avoid infringing on academic freedom, and what sort of slippery slope do you create by denying grants on moral grounds?” he asks. “There is a real need for reduced-harm research. The question is, given their history, do we let the tobacco companies fund it?”

## Moral dilemma

Duke University's Rose thinks the tobacco industry's new focus on harm reduction may usher in a healthier era of tobacco-sponsored research. This research is “high quality, innovative, and unique,” he says, and “very different from the abuses of the past.” He adds, “None of the companies that fund our studies have made any attempt to bias our work.”

Rose, a co-inventor of the nicotine patch, argues that vilifying the industry won't help the millions of smokers who are trying to quit. “The real enemy is the death and disease smokers suffer,” he says. “If we can use tobacco money to help people lead healthier lives, why shouldn't we?”

Stephen Rennard, a pulmonary physician at the University of Nebraska Medical Center in Omaha who also receives tobacco industry support, agrees. “I approach this from a public health perspective,” he says. “People are going to continue to smoke, and we need to make them as safe as we can. The tobacco industry needs university research to develop a safer product.”

One of Rennard's projects, funded by RJ Reynolds, evaluated Eclipse—a standard-looking cigarette manufactured by the company that heats rather than burns tobacco, theoretically producing less harmful smoke. Rennard later used Philip Morris money to determine how much smoke the average cigarette user is exposed to. The findings may help the company design a cigarette that reduces the levels of inhaled smoke.

Still, Rennard says that taking industry money required a lot of soul searching. “But in the end I realized that this research should be funded by tobacco companies. NIH resources should not be used to improve cigarettes. It would be like the government subsidizing the development of a better laundry detergent.”

“It's trendy to beat up on the tobacco industry,” Rennard adds. “It's simplistic, and it doesn't help the people who need to be helped. If we delay this research because of concerns about tobacco funding, it could be years before these potentially life-saving products make it to market. That would be the real tragedy.”

## Smoky past

Others think academic researchers should just say no to tobacco money. Simon Chapman, editor of the journal Tobacco Control and a professor of public health at the University of Sydney in Australia, says that despite their new efforts to support harm reduction studies, the tobacco companies have little interest in public health. “They fund this research to buy respectability and ward off litigation,” he says. Some worry that reduced-harm products are just a ploy to keep smokers addicted. Chapman says that scientists need only look at current examples of tobacco company malfeasance—from targeting youth smokers in Myanmar to using athletes to promote cigarettes in China—to see that the companies haven't changed their ways.

For many critics of mixing tobacco money with university research, the industry's history speaks for itself. For example, as the link between smoking and disease became clearer in the early 1950s, the world's largest tobacco companies established the Tobacco Industry Research Committee (TIRC)—later the Council for Tobacco Research (CTR)—to fund research into the health effects of smoking. But its main goal, internal company documents now reveal, was to obfuscate risks, and few of the studies it funded addressed the hazards of cigarettes (Science, 26 April 1996, p. 488).

“During the four decades they operated, TIRC and CTR never came to the conclusion that smoking causes cancer,” says Michael Cummings, the director of the Tobacco Control Program at the Roswell Park Cancer Institute in Buffalo, New York. “These organizations were more about public relations than science.” The industry agreed to shut down CTR in 1998 as part of an agreement—known as the Masters Settlement—that also awarded 46 U.S. states $206 billion in compensation for the cost of treating smoking-related illnesses.

But CTR wasn't the only problem. Government prosecutors have charged that the companies frequently killed their own research when it came to unfavorable conclusions, funded biased studies designed to undermine reports critical of smoking, and used the names of respected scientists and institutions to bolster their public image. The industry also lost credibility with its previous attempts at harm reduction when it touted low-tar and filtered cigarettes introduced in the 1950s and '60s as “safer,” says Chapman, while suppressing evidence that smokers drew harder on these cigarettes, thereby increasing their uptake of carcinogens.

These charges are being revisited in an ongoing federal racketeering case—the largest civil lawsuit in American history—alleging a 50-year conspiracy by the tobacco industry to mislead the public about the dangers of smoking. For its part, the industry argues that it has reformed; Philip Morris spokesperson Bill Phelps says his company believes that investing in research is the best way to address the health risks associated with smoking.

Richard Hurt, the director of the Nicotine Dependence Center at the Mayo Clinic in Rochester, Minnesota, says researchers considering industry money should remember the toll extracted by tobacco use—4.9 million deaths per year worldwide, according to World Health Organization estimates. “For anyone interested in public health, taking this money is a clear conflict of interest,” he says.
## Academic freedom

While scientists debate the merits of taking tobacco money, other authorities may take the decision out of their hands. Over the past decade, a number of institutions—including the Harvard School of Public Health and the University of Glasgow—have prohibited their researchers from applying for tobacco industry grants. In addition, organizations such as Cancer Research U.K. and the Wellcome Trust will no longer fund researchers who take tobacco money. The American Cancer Society, one of the largest private funders of cancer research, plans to adopt a similar policy this month.

Ohio State University, Columbus, found itself in the eye of the storm in 2003 when Philip Morris offered a medical school researcher a $590,000 grant at the same time a state foundation offered a nursing school researcher a $540,000 grant. Because the terms of the state grant would have prohibited all other university researchers from taking tobacco money, the school could not accept both. “There was a very heated debate among the faculty,” says Tom Rosol, the university's senior associate vice president for research, who ultimately made the decision to take the Philip Morris grant. “It came down to the issue of academic freedom,” he says. “We didn't want to accept a grant that would have placed restrictions on our investigators.”

Rosol's decision sparked a backlash, and several departments, including the Comprehensive Cancer Center and the School of Public Health, enacted tobacco funding bans, barring researchers from taking tobacco money in the future.

A resolution approved by the University of California's (UC) Academic Senate this summer would have the opposite effect. Stating that “no special encumbrances should be placed on a faculty member's ability to solicit or accept awards based on the source of funds,” the proposal would forbid any institutions within the UC system from banning tobacco funding. In a letter endorsing the resolution, UC president Robert Dynes describes such bans as “a violation of the faculty's academic freedom.”

Not everyone buys the academic freedom argument. “The university should be a role model,” says Joanna Cohen, an expert on university tobacco policies at the University of Toronto. “Academic freedom should not override its ethical responsibilities.”

Nor does the American Legacy Foundation, a Washington, D.C., tobacco education and funding organization established by the Masters Settlement, have any qualms about denying grants to institutions that take tobacco money. “We don't see this as an academic freedom issue,” says Ellen Vargyas, the foundation's general counsel. “The tobacco industry has a bad history, and this is our way of encouraging institutions not to take their money.”

The University of Nebraska's Rennard, who made himself ineligible for state money by accepting tobacco industry funds, finds these policies and the university bans deeply flawed. “Political positions should not determine scientific agendas,” he says. “If we restrict research on moral grounds, should we ban grant money from pharmaceutical companies or industries that pollute the environment? Where do you draw the line?”

As public funding gets tighter, more universities may find themselves confronting this question. The tobacco industry is poised to fill the financial void, but continued charges of company malfeasance will increase the pressure on schools to shun this money. At the end of the day, institutions will have to decide whether to overlook the source of this funding, or take the moral high road and watch it go up in smoke.

15. # As the Galaxies Turn

1. Robert Irion

Spiral disk galaxies, serene icons of the universe, are hardy survivors of a battering cosmic history

Gravity conspires to produce two dominant shapes in astronomy: spheres and disks. Both are on display in spiral galaxies, home to perhaps half the stars in the universe. Spherical central bulges of old yellow suns glow serenely, girdled by a disk consisting of curved arms of hot new stars and dark bands of dust. Such grand stellar disks, long the pinups of astronomy buffs, now play a starring role in studies of how galaxies have evolved.

Surveys with the Hubble Space Telescope reveal a panoply of disks, only hinted at from the ground, that existed when the universe was less than half its current age. By dating and classifying this huge population, astronomers are recognizing that spiral galaxies are not delicate flowers that have blossomed slowly to their current display. Instead, they are tough perennials that have survived mergers with smaller galaxies and—on occasion—crushing collisions with big ones throughout billions of years of cosmic time.

In our edge-on view of the Milky Way's plane, we gaze upon just such a stalwart bisecting the night, one that undoubtedly consumed other galaxies. The Milky Way's disk provides clues to this history, but the sleuthing is tough because we're embedded within it. “We have an opportunity to understand it at a much deeper level than other galaxies, because we can measure the motions of individual stars,” says astronomer Heidi Jo Newberg of Rensselaer Polytechnic Institute in Troy, New York. “But we're really just starting.”

## It's all in the gas

The disks we see today took a long time to develop. “Almost all star formation was in clumps and chaotic structures” for roughly the first 4 billion years of cosmic history, says astronomer Sidney van den Bergh of the Dominion Astrophysical Observatory in Victoria, British Columbia. But during the next 1 billion to 2 billion years, recognizable features started to form under the inexorable pull of gravity.

Astronomers believe that a typical primitive galaxy was a bloated cloud, slowly rotating and rich in warm gas that had not yet coalesced into many stars. Energy escaped from the cloud as atoms and molecules collided and radiated light. Gravity pulled the cooling gas more tightly together, forcing more frequent collisions, but it would have kept its original angular momentum. As time marched on, the fledgling galaxy flattened and spun faster and faster.

“The final state of a runaway collapse is a thin disk where all particles go in exactly circular orbits,” says astrophysicist Julio Navarro of the University of Victoria, British Columbia. But a galaxy isn't an idealized whorl of gas, he notes: “When the gas collects into tiny little packets of stars, you get a collection of bullets that never collide.” Without energy-robbing collisions, a star-filled disk cannot settle down if it gets perturbed by another young galaxy plunging into it—a common event in the cosmic past. Instead, stars tend to scatter into spherical swarms, like a disturbed hive of bees.

This is exactly what happened when astronomers constructed computer simulations of evolving galaxies dominated by stars. “Disks are very fragile, dynamical entities. Mergers mess them up,” Navarro says. But if mergers and collisions were so common in the early universe, why don't we see the sky full of formless elliptical galaxies?

The influence of gas is the key, Navarro and others now agree. Effervescent gas would have damped out the otherwise shattering effects of major mergers. Adolescent galaxies could have kept gas stirred up in plenty of ways: intense ultraviolet light from massive newborn stars, shock waves from supernova explosions, or outpourings of energy from vigorous cores.

Recent simulations have shown this damping effect of gas in action.

For instance, a team led by graduate student Brant Robertson of Harvard University in Cambridge, Massachusetts, produced one of the first realistic disk galaxies in a simulation that spans cosmic history. The model, reported in the 1 May Astrophysical Journal, relies on a “multiphase gas” of cold clouds surrounded by hotter material, which more accurately captures a galaxy's interstellar environment. This hybrid recipe preserves gas during mergers and stabilizes the disk against external onslaughts, Robertson says. The approach works, but it's only a start: Just 1 of 20 simulated galaxies ended up with a flat pinwheel of stars and gas, compared with about half in the real universe. Improved models may need to churn up the gas even more with early bouts of star formation, other researchers believe.

And in new work submitted to Astrophysical Journal Letters, two of Robertson's co-authors demonstrate that a classic spiral galaxy can emerge even from the wreckage of a violent collision. Astrophysicists Volker Springel of the Max Planck Institute for Astrophysics in Garching, Germany, and Lars Hernquist of Harvard plowed two simulated gas-rich disks into each other. The concussive impact sparked a blaze of star birth, but enough gas remained to settle the merged object into a flat superdisk with clear spiral arms. “If disks can ‘survive’ even major mergers, they are probably less fragile than previously thought,” the researchers write.

## Forty thousand personalities

Simulations are an alluring way to peer back to galactic youth, but nothing beats the real thing. Enter GEMS—Galaxy Evolution From Morphology and Spectral Energy Distributions—an ambitious program to deduce how overall populations of galaxies have evolved. GEMS studies about 40,000 galaxies in the Hubble Space Telescope's largest contiguous color image of the sky: 150 times as broad as its “deep field” image taken in 1995. Astronomers have good distance estimates to some 10,000 of those galaxies, from spectra obtained at the European Southern Observatory's 2.2-meter telescope at La Silla, Chile.

For astronomers, GEMS has been as transforming as seeing a photo album of hundreds of ancestors rather than just a few faded snapshots. “From the ground, these galaxies are dots. But from Hubble, each one gets a personality,” says lead scientist Hans-Walter Rix of the Max Planck Institute for Astronomy in Heidelberg, Germany.

After more than a year, the GEMS team can make firm statements about the life and times of disks since the universe was about 5 billion years old. For example, the team charted the hottest starlight from newborn suns. “For the last 8 billion years, by far the largest majority of stars have formed in disk galaxies that start to resemble our Milky Way,” Rix says. In contrast, elliptical galaxies had their heyday of spawning stars billions of years earlier.

At the outer reaches of its survey, the team sees what Rix calls “a sufficient number of galaxies with a bulge in the middle and small disks around them.” These objects, he says, are most likely the ancestors of large disk galaxies such as the Milky Way and nearby Andromeda. Moreover, such galaxies grew their disks from the inside out, a maturation that the team traces by comparing the sizes of disks to their distances from us. Today's biggest disks clearly avoided catastrophic disruption from large mergers within the last 8 billion years, Rix says.

The right neighborhood was important. Galaxies evolved more quickly if lots of others were close by, presumably driven by the stronger gravitational influences. “In dense knots, we find some disk galaxies at early times that appear like the Milky Way today, but they are premature,” Rix says. “They are likely to run out of gas and star formation, merge, and become ellipticals. That is their fate.”

## Step up to the bar

Not all is symmetrical in the realm of spiral galaxies. About two-thirds of all disks sport “bars”—elongated concentrations of stars embracing the galactic cores. Our Milky Way has one: a stubby bar first suspected in the 1980s and recently mapped by a laborious census of a distinctive class of stars within the disk.

Bars can alter disk galaxies by redistributing mass and angular momentum. “Any kind of perturbation in a cold disk tends to form bars or spiral arms,” says astronomer Shardha Jogee of the University of Texas, Austin. Once formed, a bar tugs gravitationally on gas and pulls it toward the center of the galaxy, triggering the birth of new stars. In theory, this may sow the seeds of a bar's destruction. Some early simulations showed that a central buildup of mass propels passing stars farther out onto great looping paths, dissolving the bar and its narrowly confined stellar orbits.

But more recently, astronomers have wondered how quickly these transitions might happen. “The evolution from barred to unbarred and back again can go on in the lifetime of a galaxy, but there has always been a lot of question about how fast this process is,” says astronomer Mousumi Das of the University of Maryland, College Park. GEMS points to a slower transformation than expected. The team, led by Jogee, found a constant ratio of strongly barred to unbarred galaxies at all epochs. The structures survive at least 2 billion years, if not much longer, the authors concluded in the 10 November Astrophysical Journal Letters.

Another valuable tracer of a galaxy's history is its so-called thick disk, a smattering of older stars that wander above and below the main disk. Astronomers aren't yet sure how stars in the Milky Way's thick disk got there. In one popular scenario, a galaxy merger harassed the stars out of their cozy orbits in the thin disk, perhaps 10 billion years ago. Because there are no stars younger than that in the thick disk, that event probably was the galaxy's last noteworthy consolidation, says astronomer Rosemary Wyse of Johns Hopkins University in Baltimore, Maryland.

However, Julio Navarro and his colleagues think they see imprints of a more fascinating tale. Scrutiny of the motions and chemical compositions of stars in the thick disk reveals a few odd groupings that have properties dissimilar to those of the rest of the galaxy. The team proposes, provocatively, that the thick disk is not a puffed-up set of the Milky Way's own stars but is shot through with aliens. Arcturus, a bright star not far from the sun, could be one such immigrant from a long-ago devoured galaxy.

The next step for astronomers involved in this galactic archaeology will be a thorough charting of the motions of millions of Milky Way stars all around us. One such effort, the Radial Velocity Experiment, is under way at the 1.2-meter U.K. Schmidt Telescope in Siding Spring, Australia. And a proposed extension of the U.S.-led Sloan Digital Sky Survey would examine stars in the galaxy's crowded plane, a region the survey has largely avoided.

Starting in 2011, the European Space Agency's Gaia satellite will scrutinize a billion stars, fully 1% of the galaxy's population. We may then learn how our familiar disk has kept itself together in a universe full of disorderly influences.

# Disks of Destruction

1. Robert Irion

Exploding white dwarfs are a key yardstick of the cosmos, but how does gas spiraling onto their surfaces make these stellar corpses blow up?

Most white dwarfs live gently and disappear silently. The remains of stars like our sun, white dwarfs usually cool down for billions of years and fade into black cinders. But some of these stars refuse to go quietly. Aided by matter stolen from other stars, they explode in planet-sized fusion bombs that can outshine entire galaxies.

These blasts, called type Ia supernovas, became famous in 1998 when astronomers used them as flashbulbs of standard brightness to show that the cosmos is expanding at an accelerating rate. Newer studies have validated that startling claim (Science, 19 December 2003, p. 2038). Today, the supernovas are central in efforts to decipher “dark energy,” the bizarre force driving the universe's hastening growth.

Yet tough puzzles confound the cottage industry that now studies type Ia supernovas. Astronomers don't yet know how a white dwarf gains enough mass to seal its doom. In the most popular explanation, the dwarf slowly strips gas from a nearby swollen companion star approaching the end of its life; the stolen gas forms an accretion disk that ultimately sparks the white dwarf's spectacular demise. An alternative explanation sees the donor as another white dwarf spiraling toward a messy crash. What's more, theorists disagree deeply about how a dwarf actually blows up.

With astronomers planning to expand their catalog of type Ia supernovas by thousands and increase their use as cosmological measures, these questions are unsettling. “If you don't understand type Ias, do you trust using them for cosmic evolution?” asks astrophysicist David Branch of the University of Oklahoma, Norman. “Everyone would feel better if we had some understanding of the tools we're using.”

## Shattered to smithereens

Ordinary low-mass stars create white dwarfs when their central nuclear fires burn out. Gas in the core gets crushed to an Earth-sized ball until electrons—pushed to ever-higher orbital energies—resist further squeezing. But if by some method a greedy dwarf attracts extra gas and its mass approaches the magic “Chandrasekhar limit” of 1.44 times the mass of our sun, its peaceful retirement is over.

Researchers at least see eye to eye on the basics of what happens next. “Everyone agrees that type Ia supernovas are thermonuclear disruptions of mass-accreting white dwarfs,” says astronomer Mario Livio of the Space Telescope Science Institute (STScI) in Baltimore, Maryland. Close to the critical mass, the increased pressure ignites a chain reaction in the core. New fusion rips the dwarf apart within seconds, Livio says: “The whole thing is shattered to smithereens. There is nothing left behind.”

The debris propelled out by each blast yields the only visible clues. Telescopes detect the spectral signatures of the heavy elements iron and nickel, as well as silicon, calcium, and magnesium. The patterns—especially the amount of radioactive nickel, which powers the supernova's light display—match what astrophysicists expect from the sudden combustion of a white dwarf made mostly of carbon and oxygen.

The first puzzle for astronomers is the origin of those fatal dollops of extra matter. The favored source is a full-fledged star: a binary companion to the white dwarf. When this star's core starts to run out of nuclear fuel, it burns hotter and puffs up its outer atmosphere. If the white dwarf orbits closely enough, it can swipe this loosely bound gas. Gravity pulls the gas into an accretion disk that dribbles matter onto the white dwarf's surface, and the stellar thief heats up with new life.

But the transfer must happen at just the right rate: roughly one 10-millionth of a solar mass per year, or 1/30 of Earth's mass. Any higher or lower than this, theory suggests, and thermonuclear flashes on the white dwarf's surface or vigorous winds can expel much of the matter gained from the accretion disk. This Goldilocks requirement sounds frightfully specific, but astronomers think they see white dwarfs accreting in binary systems at something like the right rate. “There appear to be enough of these to get the right kind of statistics for type Ia supernovas, roughly 3 per 1000 years in the Milky Way,” Livio says.

## Hot on the trail

Recent research has bolstered this scenario. Astronomers identified a star they believe supplied the matter to trigger the most recent type Ia event known in our galaxy: Tycho Brahe's supernova from 1572. A team led by Pilar Ruiz-Lapuente of the University of Barcelona, Spain, found the star dashing through the sizzling supernova remnant three times faster than neighboring stars—just as if it had been set free from a high-velocity binary orbit. In the 28 October issue of Nature, the team reported that the star looks like an aged version of our sun but puffed up, as if by a recent blast wave.

Another study took a different tack to reach a similar conclusion about donor stars. STScI astronomer Louis-Gregory Strolger and colleagues examined 42 distant type Ia supernovas, between 2.4 billion and 9.5 billion light-years away. The team compared how many explosions popped off in different epochs of the universe's history with the times that stars formed. In the 20 September Astrophysical Journal, the astronomers concluded that there is an average time delay of 4 billion years for new star systems to start spawning type Ia supernovas. That's about right for one member of a binary pair to expand at the end of adulthood and lose gas to a compact companion, Strolger says.

But one very different idea lingers stubbornly. Some astronomers hold that type Ia supernovas result from the ruinous mergers of two white dwarfs. Longtime proponent Icko Iben of the University of Illinois, Urbana-Champaign, points to differences from one supernova to the next as a natural outcome of mergers between dwarfs of varying masses. Moreover, type Ia supernovas in ancient elliptical galaxies—where no new stars have formed for billions of years—are best explained by binary pairs of old dwarfs that gradually spiral together, Iben believes.

Most damning of all, in Iben's view, is the missing hydrogen. Bloated stars that donate mass to a white dwarf should pollute the whole environment with hydrogen, he says. And yet astronomers have seen evidence of hydrogen in just one type Ia supernova, in 2002. It is absent in all the rest, a pattern that crashes of hydrogen-free white dwarfs would explain naturally. “To me that is a very powerful argument,” Iben says.

For years, people dismissed this idea by noting that no suitable merger candidates were seen in the Milky Way. But a recent survey by the European Southern Observatory's Very Large Telescope at Cerro Paranal in Chile changed that. Astronomers observed more than 1000 white dwarfs and discovered one massive binary pair that will merge in about 2 billion years, says survey leader Ralf Napiwotzki of the University of Leicester, U.K.

One prominent team is dubious that such pairs will explode. Astrophysicists Ken'ichi Nomoto of the University of Tokyo and Hideyuki Saio of Tohoku University in Japan verified earlier calculations that the lighter of two merging white dwarfs would break up completely and form a thick accretion disk around the more massive one. As this material streams onto the dominant dwarf, carbon within the heated star burns into oxygen, magnesium, and neon, Nomoto says. Once the composition changes, a sudden flash of fusion can no longer occur, and the dwarf collapses “peacefully” into a neutron star. But other theorists maintain that synchronized rotation between the two dwarfs would force a different style of accretion, triggering an all-consuming explosion.

## Got a match?

You might think it would be easy to explain how a single white dwarf blows up, but the story is anything but neat. Theorists know the nuclear physics well, says astrophysicist Alexei Khokhlov of the University of Chicago, Illinois. “But we don't know the initial conditions, and the simulations are very complex,” he says.

Theorists concur that the event must start at or near the dwarf's core, when increased pressure from the added matter sparks runaway fusion among carbon and oxygen atoms. But even this initial step causes angst. “The most crucial question in these detailed models is how the flame ignites, but there's a lot of hand waving involved,” says astrophysicist Wolfgang Hillebrandt of the Max Planck Institute for Astrophysics in Garching, Germany.

Hillebrandt's colleague Friedrich Röpke envisions “a central ignition with a foamy structure. One expects strong convection before the ignition, so it should ignite in several spots distributed around the center.” This wave of burning, called deflagration, spreads erratically through the star's interior at subsonic speeds as the material convects, leaving unburned pockets of carbon and oxygen.

At this point, theorists diverge. The German group thinks deflagration can explain the entire supernova, blowing it apart in about 2 seconds. But others point out that such an explosion would produce debris that is too rich in unburned light elements and too “mixed” from the inside out. Spectra from real type Ia supernovas suggest a more layered explosion, says Oklahoma's Branch.

To better match those data, simulations by other teams invoke “delayed detonation.” Parts of the dwarf burn with a deflagration flame, and then the whole thing explodes with supersonic shock waves. The shocks forge heavy elements throughout the rest of the dwarf in tenths of a second. Indeed, if deflagration is like a wind-driven fire that torches some trees and skips others, detonation is a bomb that incinerates the whole forest. The problem is that no one knows how to set the bomb off. “There is no known physical mechanism to convert deflagration to a supersonic detonation,” says Röpke.

One possible solution recently startled the field. Theorists led by Tomasz Plewa of the University of Chicago set off a simulated detonation with a supersonic bubble that rose from the dwarf's interior, rather like a jellyfish. When the bubble broke the surface, it unleashed waves that raced around the white dwarf—confined by gravity—and collided on the far side. The smashup was violent enough to incinerate the star in an asymmetric blast. No one quite knows what to make of the weird sequence of events, published in the 1 September Astrophysical Journal Letters.

Observers watch these efforts with anticipation and frustration. “We need a guide from theorists to understand the differences in these objects, and I'm not sure whether theory is up to it,” says astronomer Adam Riess of STScI. At one recent talk, Riess says, he counted 15 “knobs” that the theorist turned to adjust his model. “That's a huge number of ways to produce a supernova,” he says. “I found that a little distressing.”

Keep the faith, says Khokhlov: “We have great hope that the explosion mechanism is somehow channeled through a very narrow evolutionary bottleneck,” with a set of unique solutions that theorists will unveil. Many hope that will happen soon, because without it there is an element of doubt when astronomers claim to divine the history and fate of the entire universe.