News this Week

Science  07 Jan 2005:
Vol. 307, Issue 5706, pp. 22

    In Wake of Disaster, Scientists Seek Out Clues to Prevention

    1. Yudhijit Bhattacharjee*
    * With reporting by Pallava Bagla in New Delhi.

    Having claimed more than 150,000 lives and destroyed billions of dollars' worth of property, nature last week reminded the world of the terrible cost of ignorance. Now the nations devastated by the massive earthquake and tsunami that ravaged the Bay of Bengal the morning after Christmas Day are hoping to marshal the political and scientific will to reduce the toll from the next natural disaster.

    A week after the tragedy, the question of how many lives might have been saved had authorities in those countries recognized the danger in time to evacuate their coasts remains unanswered. But it's a hypothetical question, because the information needed to take such steps doesn't exist. That's why researchers are gearing up for an international data-collection effort in the affected countries, aimed at improving models of how tsunamis form and setting up a warning system in the Indian Ocean. “This was a momentous event both in human and scientific terms,” says Costas Synolakis, a civil engineer and tsunami researcher at the University of Southern California in Los Angeles. “It was a failure of the entire hazards-mitigation community.”

    Surprise attack.

    While tsunami waves ravaged towns such as Lhoknga, Indonesia (as shown in before-and-after satellite photos), scientists across the Bay of Bengal saw no danger coming.


    As relief efforts continue, scientists are traveling to the ravaged coasts to survey how far inland the water ran up at different points along the shorelines, how tall the waves were, and how fast they hit. In addition to providing a detailed picture of the event, says Philip Liu, a tsunami expert at Cornell University who is flying to Sri Lanka this week, information from these field surveys will enable researchers to test computer models that simulate the propagation of tsunami waves and the pattern of flooding when they break upon the shore. The geographical span of the disaster presents an opportunity to “run simulations on a scale that has not been possible with data from smaller tsunamis in the Pacific,” says Synolakis, who is joining Liu in Sri Lanka. Among other surveys being conducted in the region is one led by Hideo Matsutomi, a coastal engineer at Japan's Akita University, who is studying the disaster's effects on Thailand's shoreline.

    Testing and refining tsunami models would increase their power to predict future events—not just in the Indian Ocean but elsewhere, too, says Vasily Titov, an applied mathematician and tsunami modeler at the Pacific Marine Environmental Laboratory in Seattle, Washington. Synolakis says the goal is to be able to predict, for any given coast with a given topography, which areas are most vulnerable and thus in greatest need of evacuation.

    Such predictions would be easier to make if ocean basins resembled swimming pools and continents were rectangular-shaped slabs with perfect edges. But the uneven contours of sea floors and the jagged geometry of coastlines make tsunami modeling a complex engineering problem in the real world, Titov says. Exactly how a tsunami will travel through the ocean depends on factors including the intensity of the earthquake and the shape of the basin; how the waves will hit depends, among other factors, on the lay of the land at the shore.
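    The propagation half of that problem does have one famously simple ingredient: in open water a tsunami behaves as a shallow-water wave, so its speed depends almost entirely on ocean depth. A minimal sketch of that relation (the depths below are illustrative round numbers, not Bay of Bengal bathymetry):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m):
    """Shallow-water approximation: wave speed c = sqrt(g * depth)."""
    return math.sqrt(G * depth_m)

# Illustrative depths: open ocean, continental shelf, near shore
for depth in (4000, 1000, 50):
    c = tsunami_speed(depth)
    print(f"depth {depth:>4} m -> {c:5.1f} m/s ({c * 3.6:4.0f} km/h)")
```

    The slowdown in shallow water is what piles the wave up at the coast; the hard part the modelers describe is everything this one-line formula leaves out: real bathymetry, jagged coastlines, and the run-up onto land.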

    What makes tsunami warnings even more complicated, Synolakis says, is that undersea quakes with magnitudes as great as 7.5 often fail to generate tsunami waves taller than 5 centimeters. “What do you do without knowing precisely where and when the waves will strike and if they will be tall enough to be a threat?” he says. “Do you just scare tourists off the beach, and if nothing comes in, say, ‘Oh, sorry’?”

    It wasn't concerns about issuing a false alarm, however, that prevented scientists in India, Sri Lanka, and the Maldives from alerting authorities to the tsunami threat. Instead, researchers say, the reason was near-total ignorance. At the National Geophysical Research Institute (NGRI) in the south Indian city of Hyderabad, for example, seismologists knew of the earthquake within minutes after it struck but didn't consider the possibility of a tsunami until it was too late. In fact, at about 8 a.m., an hour after the tsunami had already begun its assault on Indian territory by pummeling the islands of Andaman and Nicobar some 200 km northwest of the epicenter, institute officials were reassuring the media that the Sumatran event posed no threat to the Indian subcontinent.

    About the same time, in neighboring Sri Lanka, scientists at the country's only seismic monitoring station, in Kandy, reached a similar conclusion. “We knew that a quake had occurred—but on the other side of the ocean,” says Sarath Weerawarnakula, director of Sri Lanka's Geological Survey and Mines Bureau, who hurried to his office that morning after feeling the tremors himself. “It wasn't supposed to affect us.”

    Walls of water crashing onto the Indian and Sri Lankan coasts soon proved how wrong the scientists were. The waves flung cars and trucks around like toys in a bathtub and rammed fishing boats into people's living rooms. “We'd never experienced anything like this before,” says NGRI seismologist Rajender Chadha. “It took us completely by surprise, and it was a terrible feeling.”

    The international scientific community fared somewhat better at reacting to the quake, but not enough to make a difference. An hour after the quake, the Pacific Tsunami Warning Center (PTWC) in Ewa Beach, Hawaii—which serves a network of 26 countries in the Pacific basin, including Indonesia and Thailand—issued a bulletin identifying the possibility of a tsunami near the epicenter. But in the absence of real-time data from the Indian Ocean, which lacks the deep-sea pressure sensors and tide gauges that can spot tsunami waves at sea, PTWC officials “could not confirm that a tsunami had been generated,” says Laura Kong, director of the International Tsunami Information Center in Honolulu, which works with PTWC to help countries in the Pacific deal with tsunami threats.

    Off the scale.

    The Sumatra quake turned out to be far more powerful than early readings suggested.


    However, some researchers say that the seismic information alone—including magnitude, location, and estimated length of the fault line—should have set alarm bells ringing. Although not all undersea quakes produce life-threatening tsunamis, the Sumatran quake—later pegged at magnitude 9.0—was “so high on the scale, you had to know that a large tsunami would follow,” says Emile Okal, a seismologist at Northwestern University in Evanston, Illinois. What may have made it difficult for officials to reach that conclusion, says Okal, was the rarity of tsunamis in the Indian Ocean: Fewer than half a dozen big ones have been recorded in the past 250 years.

    But even if there had been reasonable certainty that a tsunami was building up stealthily under the waters, scientists say they are not sure what they could have done. As the morning wore on, for example, geophysicists in India realized that “a tsunami would be generated, but how it would travel and when it would strike—we simply had no clue,” says Chadha.

    That's exactly the kind of information that countries in the region hope to have the next time a tsunami comes calling. The Indian government last week announced plans to spend $30 million to set up a warning system within the next 2 years; Indonesia and Thailand have since announced similar plans of their own. Like those in the Pacific, the proposed warning systems will include up to a dozen deep-sea buoys, which detect the pressure changes on the sea floor as tsunami waves pass overhead, and tide gauges to measure the rise and fall of sea level.

    Kapil Sibal, minister of state for science and technology and ocean development, says India plans to collaborate with Indonesia, Thailand, and Myanmar to eventually build a tsunami warning network in the region. “We've been jolted hard, and we'll take remedial action,” Sibal says.


    Chemokine Gene Number Tied to HIV Susceptibility, But With a Twist

    1. Jon Cohen

    Like a long-married couple, a virus and its host shape each other in subtle yet profound ways. AIDS researchers investigating this dynamic have detected several changes in both HIV and humans that likely evolved during the high-stakes wrestling match between the virus, the cells it infects, and the immune system. Now a massive review of DNA from more than 5000 HIV-infected and uninfected people has found that the human genome appears to have responded to the virus by stockpiling extra copies of immune genes that influence a person's HIV susceptibility as well as the course of disease in infected people. These findings may lead to an important practical advance: better designed AIDS vaccine studies.

    Described in the 6 January Science Express, the DNA analysis focuses on a gene with the ungainly name of CCL3L1. Steven Wolinsky, a virologist at Northwestern University Medical School in Chicago, Illinois, whose lab also has studied the relationship between immune genes and HIV, calls the work “an intellectual and technical tour de force.”

    No vacancy.

    When CCL3L1 (red) occupies the CCR5 receptor on CD4 cells, it blocks HIV's entry.


    Sunil Ahuja, an infectious-disease specialist at the Veterans Administration Research Center for AIDS and HIV-1 Infection in San Antonio, Texas, led an international team that examined the importance of segmental duplications in the human genome. People typically have two copies of each gene (one from each parent), but stretches of DNA sometimes appear repeatedly, causing the overrepresentation of certain genes. Many of the segmental duplications discovered to date include genes related to immunity, inspiring the notion that some duplications protect against invaders such as viruses. Ahuja and co-workers wondered whether HIV might be the target of such an evolutionary response.

    The researchers first hunted for segmental duplications that include CCL3L1 in 1000 people from 57 populations. Immune cells signal one another using chemicals called chemokines, and CCL3L1 codes for one that docks onto the same white blood cell receptor, CCR5, that HIV grabs to infect the cells. In theory, as levels of this chemokine rise, it fills more CCR5 receptors, blocking HIV's ability to infect.

    Ahuja and his colleagues found that the copy number of CCL3L1 varies from person to person and influences an individual's level of the chemokine. But by itself, this number didn't determine HIV susceptibility. Rather, susceptibility depended on how many copies a person had compared with others of the same ancestry. For example, their review revealed that Africans had a median of four copies of CCL3L1, whereas Europeans had an average of two. At first blush, this evidence seems to suggest that HIV might have a more difficult time causing harm in Africans. But a closer analysis revealed nothing of the sort.

    For 20 years, the U.S. military has closely followed a racially diverse cohort of HIV-infected people. Ahuja joined a team led by Matthew Dolan of the Tri-Service AIDS Clinical Consortium to use DNA from these 1000 people to help unravel the relation between CCL3L1 and HIV. After matching the cohort by race and ethnicity to more than 2000 uninfected controls, the researchers compared how many copies of CCL3L1 each person had. From these data, they concluded that segmental duplications of the gene thwarted infection in the controls and slowed disease in the infected—but only if people had a higher number than average for their racial or ethnic background. And people who had fewer copies of the gene relative to members of their ethnic group—including babies of infected mothers—had increased susceptibility to HIV.

    Factoring in CCL3L1 status could help separate wheat from chaff in AIDS vaccine studies. To date, vaccine testers have paid little attention to differences in genetic susceptibility to HIV. But if a person has, say, a high level of genetic protection, a vaccine might appear to work when it did not. Conversely, highly susceptible people could make a good vaccine look bad. Ahuja and co-workers propose that by analyzing CCL3L1 and similar genetic factors together, researchers could illuminate the now invisible line that separates the effects of vaccines from the power of the host's genes.

  JAPAN

    New Budget Accelerates Shift to Competitive Grants

    1. Dennis Normile

    TOKYO—Academic research in Japan appears to have more than held its own in a tight funding year. A 2005 budget adopted last week by the cabinet of Prime Minister Junichiro Koizumi features a 2.6% boost for the direct funding of research, far outpacing a 0.1% rise in overall government spending. It also bucks a 0.8% dip in the country's total science budget, the first such decline in decades. “Given how tight the government budget is, this is not so bad,” says Akio Yuki, vice minister of the Ministry of Education, which accounts for the bulk of Japan's scientific efforts.

    Tuning in.

    Japan will more than double funding this year for the Atacama Large Millimeter/Submillimeter Array in Chile, a joint project under way with the United States and Europe.


    The decline in science-related spending overall, to $34.1 billion in the fiscal year that starts 1 April, is driven by a 22% decrease in defense research and development. The chief cuts are in new weapons systems and aircraft procurement. Most of this money goes to defense contractors, however, and “has little connection to academic research,” says Reiko Kuroda, a biochemist at the University of Tokyo and a member of the Council for Science and Technology Policy, the nation's highest science advisory body. The government also fell short of its 2000 promise to double science spending over 5 years, to an aggregate 24 trillion yen ($229 billion). Officials blame a sluggish economy, although they expect government spending to reach 75% of that goal by the end of the fiscal year.

    The $12.6 billion slated for day-to-day research needs such as supplies and equipment includes a 30% rise in funding for competitive grants, to $4.4 billion. That's part of a concerted effort to wean university scientists off a system of small but universal block grants and onto one that rewards the best ideas. The increased support, up 57% since 2000, comes from a combination of new funding and a diversion of resources from older, directed programs in fields such as nuclear power engineering. “There was a lot of resistance,” Kuroda says about the shift to a more open process (Science, 27 June 2003, p. 2027). But she says that Koizumi, the nominal head of the science council, applied the political pressure needed to bring the bureaucrats in line.

    Universities will also feel the bite of increased competition. The new budget allows them for the first time to claim 30% of selected large grants for administrative costs and overhead. In return, however, the government is cutting back on a fund that supports operating expenses on campus. The bottom line is that universities will become more dependent for their operating expenses on grants to individual researchers, a change that Kuroda and others worry could have a negative impact on institutions that put a greater emphasis on teaching than on research.

    There's good news for universities funded by the Ministry of Education, where science funding is rising almost across the board. In addition to competitive grants, areas receiving significant boosts include big-ticket facilities, such as the Atacama Large Millimeter/Submillimeter Array being built in Chile, and projects expected to have a short-term economic payoff. Favored fields include the life and environmental sciences, nanotechnology, and information technology.

    The science council has not yet settled on spending targets for a third 5-year plan that would run through the 2010 fiscal year. But the business community is already lobbying for continued increases in science. In November, the Keidanren, Japan's most influential business group, called on the government to hold firm to its goal of raising science spending to 1% of the country's gross domestic product. That percentage is expected to stand at 0.8% by the end of the 2005 fiscal year. “The industrial sector has had to cut back on basic R&D,” says Keiichi Nagamatsu, Keidanren's managing director. “We're looking to the universities to fill that role.”

    The cabinet adopted the 2005 budget on 24 December. It now goes to the Diet, Japan's legislative branch, where approval is typically routine.


    Coral Ages Show Hawaiian Temples Sprang From Political Revolution

    1. Erik Stokstad

    Hawaiian legends say a ruler named Pi'ilani brought peace to Maui by routing rival chiefs, marrying a powerful queen, and setting himself up as absolute ruler. Historians agree that this progression from feuding chiefs to kingdom, repeated on several other Hawaiian islands, ultimately created a highly stratified society with elaborate religious rituals that justified the divine right of kings. But they have never been sure how long it took for a religious state to emerge.

    Power base.

    Ruins on Maui suggest that the island's first king exerted control by quickly building temples, such as those seen elsewhere by Captain Cook (left).


    Now a preliminary study of temples on Maui, described on page 102 of this issue of Science, suggests it may have happened within a single generation, around 1600 C.E., just as the stories suggest. By dating coral offerings using a geological technique based on ratios of uranium and thorium isotopes, archaeologist Patrick Kirch of the University of California, Berkeley, and geochronologist Warren Sharp of the Berkeley Geochronology Center have shown that several large temples on Maui were built at about the same time, perhaps within 30 years. The application of this technique is “a major advance in Hawaiian archaeology,” says J. Stephen Athens of the International Archaeological Research Institute Inc. in Honolulu.

    The most sophisticated and stratified societies in the Pacific evolved on the Hawaiian Islands. Oral histories written down in the 19th century provide a rich source of information about the rise of royalty. Other clues come from the many temples these rulers built to demonstrate their divine power and to receive tribute. Yet the technique normally used to date ancient artifacts, radiocarbon dating, can't get a clear fix on such recent history.

    Kirch and Sharp solved that problem by applying another kind of radiometric dating typically used to date high-and-dry coral reefs and reconstruct the history of sea level. When Hawaiians built temples to agricultural gods, they placed coral into the basalt walls and foundations, presumably as offerings. Because the coral preserves fine details, Kirch and Sharp argue that it was freshly cut from living reefs. By dating the coral, they could find out when the temples were constructed.

    As coral-producing organisms grow, they incorporate uranium from seawater into their skeletons. The uranium decays into thorium-230 at a precisely known rate, so by measuring the ratio of uranium-238 to thorium-230, the researchers could tell precisely how long ago the coral had been cut from the reef.
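    In schematic form, the age falls out of simple exponential decay. A toy version, assuming the coral started thorium-free and treating the decay chain as a single step with a rounded thorium-230 half-life (the actual laboratory analysis is considerably more involved):

```python
import math

TH230_HALF_LIFE_YR = 75_700          # thorium-230 half-life, rounded
LAMBDA_230 = math.log(2) / TH230_HALF_LIFE_YR

def uth_age_years(activity_ratio):
    """Closed-system age from the measured (230Th/U) activity ratio,
    assuming the coral skeleton initially contained no thorium."""
    return -math.log(1.0 - activity_ratio) / LAMBDA_230

# Round-trip check: a 400-year-old coral has built up only a trace
# of thorium-230, yet the ratio still pins down its age.
ratio_at_400_yr = 1.0 - math.exp(-LAMBDA_230 * 400)
print(round(uth_age_years(ratio_at_400_yr)))  # -> 400
```

    Because the half-life is so long relative to a few centuries, the accumulated thorium is tiny, which is why the method hinges on extremely precise isotope-ratio measurements.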

    To their surprise, samples from eight temples on southeast Maui, including one as large as 1400 square meters (see photo above), all yielded dates between 1580 and 1640 C.E. The samples that most accurately reflected the time of collection from the sea—those from the tips of branches, the youngest part of the coral—yielded an even tighter age range, perhaps as narrow as 30 years. “We can now rule out gradual construction,” Kirch says. “The rapidity is striking.”

    That fast pace, Kirch and Sharp argue, implies a major change in politics. “It looks like one person taking control of the system and ratcheting up [his power],” Kirch says, because only a powerful ruler could have marshaled the labor to build such temples so quickly. Michael Kolb of Northern Illinois University in DeKalb suggests that the similarity of the offerings could also indicate a centralized authority. “The standardization of worship hints at state religion,” he says. “It shows you just how centralized the power was.”

    The ruler could very well have been Pi'ilani, Kirch and Sharp say. A count of generations in the oral histories suggests that he reigned from roughly 1570 to 1600 C.E. Once Maui had been unified, however, Pi'ilani's peace didn't last. His descendants began to fight the kings of other islands in ever-bloodier battles. Interisland warfare lasted until Kamehameha the Great of Hawaii consolidated power in 1805 through the use of weapons obtained from the Europeans.


    Gorging Black Hole Carves Out Gigantic Cavities of Gas

    1. Robert Irion

    The most energetic eruption yet found in space has yielded the first direct measure of a black hole's prodigious appetite. The outburst, still going strong after 100 million years, has gouged two enormous cavities within the hot gas in a distant cluster of galaxies. The stark features show that even mature black holes can disrupt star birth and influence matter far beyond their host galaxies.

    Each of the “supercavities,” reported in the 6 January issue of Nature, could swallow 600 galaxies the size of our Milky Way. To shove aside such vast volumes of gas, the eruption has churned out as much energy as nearly a billion gamma-ray bursts—the most powerful impulsive explosions known. “Seeing this huge amount of energy was quite surprising, one might even say shocking,” says astrophysicist Richard Mushotzky of NASA's Goddard Space Flight Center in Greenbelt, Maryland, who is not part of the research team.

    Gaping holes.

    X-rays from hot gas in a cluster of galaxies (left) outline two “supercavities” cleared out by an eruption from a central black hole (artist's view, right).


    The cavities appear in a galactic group called MS0735.6+7421, about 2.6 billion light-years from Earth. The fully developed cluster looks unremarkable in visible light, says the study's lead author, astronomer Brian McNamara of Ohio University in Athens. At its center resides a supermassive galaxy, bloated by billions of years of consuming smaller galaxies in the cluster. Radio images had revealed a classic double-sided jet of energy streaming away from this central galaxy, suggesting that it hosts a black hole still gorging on infalling gas.

    An 11-hour observation by NASA's Chandra X-ray Observatory exposed voids in the hot gas that pervades the cluster, cleared out along the paths of the radio jets. By tracing the sizes of those voids, the astronomers measured how hard the black hole had to work to displace the gas—in the same way that lungs need to exert more force to inflate a larger balloon. “[The supercavities] allow us to measure the energy deposited by the central black hole into its surroundings in the most direct possible fashion,” McNamara says.

    The calculation shows that the black hole must have devoured about three times the mass of our sun each year for the last 100 million years, says co-author Paul Nulsen of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. That average rate is similar to the feeding frenzy that probably powered quasars at the cores of galaxies in the early universe, but it's unheard of in modern galaxies. Thus, it appears that black holes within some clusters may have grown at a fantastic rate even in relatively recent times, Nulsen says.
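    The chain of reasoning from cavity to feeding rate is short enough to sketch. With illustrative inputs (an assumed outburst energy of order 10^54 joules and a standard 10% mass-to-energy conversion efficiency; neither figure is taken from the Nature paper):

```python
C_LIGHT = 3.0e8    # speed of light, m/s
M_SUN = 2.0e30     # solar mass, kg

# Assumed, illustrative inputs -- not figures from the paper
outburst_energy_j = 5.4e54   # energy needed to carve out the cavities
duration_yr = 1.0e8          # eruption age, ~100 million years
efficiency = 0.1             # fraction of accreted rest mass released

# Mass the black hole must swallow per year to power the outburst
accreted_mass_kg = outburst_energy_j / (efficiency * C_LIGHT**2)
rate_msun_per_yr = accreted_mass_kg / duration_yr / M_SUN
print(f"{rate_msun_per_yr:.1f} solar masses per year")  # -> 3.0
```

    The logic runs backward from the observation: the cavity volumes and surrounding gas pressure fix the energy, the energy and an assumed efficiency fix the swallowed mass, and the eruption's age converts that into a rate.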

    The eruption also gives tangible evidence of a poorly understood process that helps shape how the cosmos looks, Mushotzky notes. Astrophysicists have long suspected that “feedback” of blazing energy from the centers of galaxies can heat gas for millions to billions of years, preventing new stars from forming as quickly as models predict. The details are still elusive, but the new work offers some insights. “Here, for the first time, you're actually seeing the energy injected and the gas being heated,” Mushotzky says. “We're all really excited.”


    Cesium Collisions Help Create Colder Antihydrogen

    1. Charles Seife

    A clever new way to make antihydrogen may bring scientists one step closer to understanding how matter differs from antimatter.

    In the 31 December issue of Physical Review Letters, a group of physicists describes a laser-assisted technique to make antihydrogen, which mirrors everyday hydrogen by consisting of an antiproton bound to an antielectron. “It's really very different in principle” from previous methods of making antihydrogen, says Gerald Gabrielse, a physicist at Harvard University, who worked with a handful of antimatter-makers known as the ATRAP collaboration to develop the new approach.

    Ave, cesium.

    A beam of excited cesium atoms hits a trap full of antielectrons, and the products fall into a pile of antiprotons. The result: cold antihydrogens.

    CREDIT: ADAPTED FROM C. H. STORRY ET AL., PRL 93, 263401 (2004)

    For years, ATRAP and a rival group, ATHENA, have been cooling antiprotons (which come from a beam at CERN, the European particle physics lab near Geneva, Switzerland) and antielectrons (which come from a radioactive source) and mixing them in a magnetic bottle in hopes of producing antihydrogen. Both teams have created thousands of antihydrogen atoms this way (Science, 15 November 2002, p. 1327). However, those antihydrogens were relatively warm—several degrees above absolute zero—and, therefore, moving too fast to capture and study in detail.

    ATRAP's new method collects antiprotons and antielectrons in separate magnetic traps. Then the researchers shoot cesium atoms toward the antielectrons, exciting the atoms with lasers to force their electrons into larger-than-typical orbits around the nucleus. “We make [the cesium atoms] very big—and a big thing has a higher probability” of striking an antielectron in the trap, says Gabrielse.

    After impact, the cesium's electron binds to the antielectron, forming an unstable and excited conglomerate known as positronium. The positroniums zoom away in all directions, and some wind up in the nearby trap containing antiprotons. Following another collision, the antielectron once again jumps ship and hops to the antiproton, forming an excited antihydrogen.

    This Rube Goldberg-ish method has so far produced fewer than two dozen antihydrogens. But in principle, it should allow physicists to create very cold and slow-moving antihydrogens. “Since the positronium is so lightweight compared to the antiprotons, when they collide, it's very hard for them to heat up the antiprotons,” says Gabrielse. Because physicists can potentially cool antiprotons to within a few hundred thousandths of a degree of absolute zero, this method might, without too much tweaking, yield antihydrogens slow enough to study.
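    Gabrielse's point about the mass mismatch follows from elementary collision kinematics: a light projectile can hand only a small fraction of its kinetic energy to a heavy target. A rough illustration for a head-on elastic collision (the real positronium-antiproton encounter, with its internal rearrangement, is messier):

```python
def max_transfer_fraction(m, M):
    """Largest kinetic-energy fraction a projectile of mass m can give
    a target of mass M in a head-on elastic collision: 4mM/(m+M)^2."""
    return 4.0 * m * M / (m + M) ** 2

# Masses in units of the electron mass
m_positronium = 2.0     # an electron plus an antielectron
m_antiproton = 1836.0

frac = max_transfer_fraction(m_positronium, m_antiproton)
print(f"{frac:.2%} of the positronium's energy, at most")  # under 0.5%
```

    However fast the positronium arrives, the antiproton it hits barely recoils, which is why the resulting antihydrogen inherits the cold antiprotons' temperature rather than the positronium's.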

    “Anything that goes in this direction is welcome,” says Rolf Landua, a CERN physicist and member of the ATHENA collaboration. But the low yield is a problem, he cautions, and studying the produced antihydrogen properly will likely require deexciting the atoms, perhaps with another laser. “Maybe, in the end, that will be the way forward, but it looks complicated,” Landua says.

    Unfortunately, scientists will have to wait to find out. The antiproton source at CERN has been shut down until 2006 to speed construction of the Large Hadron Collider.


    Mild Illnesses Confound Researchers

    1. Dennis Normile*
    * With reporting by Martin Enserink.

    TOKYO—Ten months after an outbreak of highly pathogenic avian influenza, researchers in Japan have confirmed that four employees of an infected farm and one governmental health official are carrying antibodies to the H5N1 virus. These are the first documented cases of mild or asymptomatic infections in humans to emerge from last year's outbreak. In Vietnam and Thailand, the disease resulted in death in more than 70% of confirmed human cases.

    Spot check.

    A worker draws chicken blood for disease testing.


    Viruses “typically” cause a wide range of symptoms in humans, says Yi Guan of the University of Hong Kong, who has studied H5N1 since it emerged there in 1997. Similar results were found in surveys of wild-animal dealers in China after the 2002 severe acute respiratory syndrome outbreak and among cullers and poultry workers in Hong Kong after the 1997 H5N1 outbreak. The new cases should help scientists understand the behavior of avian flu in humans. “It is important to learn what percentage of people exposed to the virus become infected, and among those, how many develop severe and how many develop mild illnesses,” he adds.

    When the Japanese H5N1 outbreak was confirmed at a chicken farm in Kyoto Prefecture last February, Japan's National Institute of Infectious Diseases urged local officials to survey farm workers, health inspectors, and those who destroyed the chickens. Institute virologist Masato Tashiro, director of the World Health Organization collaborative center for influenza surveillance and research in Japan, says the difficulties in detecting low levels of antibodies slowed the work, and then prefectural officials dithered over releasing the results.

    Out of 7000 people potentially exposed, only 58 agreed to participate in the survey. Those 58 included 17 of 19 people who worked on the infected farm before taking the antiviral medication Tamiflu or wearing protective clothing. The five people who proved to be seropositive were among this group; none of those who took Tamiflu before going to the farm or wore protective gear while there proved positive. “We think this does say something about the value of antiviral medication and proper protection,” notes Tashiro.

    Albert Osterhaus, a virologist at the Netherlands' Erasmus University Medical Center in Rotterdam, suggests that the five Japanese could have developed antibodies in response to viral antigens in the farm environment and were never actually infected with the H5N1 virus.

    Why the infections, if they did occur, proved so mild is less clear. Tashiro offers several possibilities. For one, the genetic sequence of the viral strain collected in Japan and Korea differs from that of the strain that appeared later in Thailand and Vietnam. In addition, once the presence of H5N1 was confirmed, farm workers and health officials who had visited the farm took Tamiflu, perhaps in time to reduce the severity of infection. Finally, exposure to the virus could have been more limited than among the patients in Thailand and Vietnam, many of whom raised chickens at home.

    “We don't have any controls, so it's difficult to determine just why these differences occurred,” Tashiro says. Scientists hope that surveys of cullers in Thailand and Vietnam who did not take Tamiflu and were often not wearing proper protective gear may answer these questions.


    Europe Draws Up Its Own Strategy for Visiting the Moon and Mars

    1. Daniel Clery

    CAMBRIDGE, U.K.—President George W. Bush's announcement last January of a major push to explore the moon and Mars may have generated lots of headlines (Science, 16 January 2004, p. 293). But while the fate of that plan remains up in the air, Europe's own strategy for planetary exploration, begun 3 years ago, is gathering real support.

    First step.

    Plans call for a launch of ExoMars in 2011.


    Late last month the European Space Agency (ESA) announced that member states had nearly tripled the budget for the Aurora program, which is planning a series of missions culminating in a crewed visit to Mars in 2033. Although many researchers are wary of the commitment to send astronauts, they generally support Aurora's aims. “As someone who is interested in planets, Aurora can do it for us,” says John Zarnecki of the Open University in Milton Keynes, U.K.

    ESA's initial proposal for Aurora in 2001 attracted just $19 million of the $27 million requested from members—an inauspicious start. Piero Messina, Aurora spokesperson at ESA headquarters in Paris, says the shortfall occurred when Italy, a strong supporter of planetary exploration, suddenly had a change of government and “could not live up to its earlier commitments.”

    ESA researchers began work with what they had, but before long the context had changed. NASA's prolonged grounding of the shuttle fleet in February 2003 and a reduced U.S. commitment to the international space station created problems, whereas NASA's new moon-Mars program opened up new possibilities for collaboration. In July, ESA asked member states to provide new money for studies. Italy came through with $17 million on top of its original $3.4 million, currently making it the largest contributor to the $56 million that has been pledged. Another surprise was the additional $6.7 million from the U.K., a longtime opponent of crewed missions.

    A mission strategy will be hammered out over the coming year. Messina says ESA researchers are working on three possible scenarios for lunar exploration that they will present next month. Aurora's first mission, the 2011 ExoMars orbiter and lander, is already well defined. ESA and NASA are both planning missions to bring samples back from Mars, and officials from both agencies are now working out how they might collaborate. “There is a will to converge,” says Messina.

    The pressure to cut costs will intensify by the year's end when ESA presents the full Aurora program to ministers from member nations. Messina estimates that ESA will need $1.3 billion for the first 5 years to begin building the spacecraft. Ian Halliday, head of the U.K. Particle Physics and Astronomy Research Council, says ESA's current cost projections are “wishful thinking.” Even its supporters don't dare hazard a guess about its prospects. Says Zarnecki, “I haven't a clue.”


    Philadelphia Institution Forced to Cut Curators

    1. Jocelyn Kaiser

    A chronic budget shortfall has forced the oldest natural history institution in the United States to lay off 5% of its staff. Outside scientists are especially concerned that the Academy of Natural Sciences in Philadelphia is losing three of its 10 curators, including the overseer of a prized, nearly 200-year-old ornithology collection. The move is part of a trend of cutbacks at natural history museums. “We're losing positions. It's of national concern,” says Smithsonian Institution ornithologist Helen F. James.

    Scientific treasure.

    The ornithology collection at the financially troubled Philadelphia academy includes specimens of the extinct Australian paradise parrot (Psephotus pulcherrimus).


    The academy, founded in 1812, runs a museum and research programs and houses 17 million biological specimens. Its $12 million annual budget has faced deficits of $500,000 to $1 million for a decade, explains president and CEO D. James Baker, former head of the National Oceanic and Atmospheric Administration. As a result, Baker says leaders made the “painful decision” last month to lay off 13 of 250 employees across all divisions. The layoffs go into effect over the next 6 months. Thomas Lovejoy, head of the Heinz Center, an environmental think tank in Washington, D.C., and an academy board member, says that the cuts were inevitable. “They just had to address” the deficit, he notes.

    The three curators losing their jobs are Leo Joseph, assistant curator and chair of ornithology; Richard McCourt, an associate botany curator; and Dominique Didier-Dagit, an associate curator of ichthyology. Some outside scientists who asked not to be identified suggest that these junior scientists weren't pulling in enough grant money. Baker doesn't deny the charge, saying that the academy tried to keep staff in “areas where we think there is research support from outside agencies.” (Joseph and McCourt referred calls to an academy spokesperson.)

    The academy's ornithology collection, which now has no curator, is a paramount concern. The holdings include many of the earliest specimens collected by North American ornithologists as well as the Australia collection of British ornithologist John Gould. Baker says the academy “has made an absolute commitment to preserve” this resource, which will still have a manager to make it available to scientists. But experts worry that the absence of a curator to add specimens and conduct his or her own research could undermine it. “A collection should be part of a living and breathing community,” says A. Townsend Peterson, ornithology curator of the Natural History Museum at the University of Kansas, Lawrence.

    Baker is mum on future staffing plans, saying only that “we can grow our number of curators” if the budget outlook improves. But he predicts that a focus on certain areas, such as watershed management and molecular systematics, will create “a stronger institution.”


    Funding Woes Delay Survey of U.S. Graduate Programs

    1. Jeffrey Mervis

    The National Research Council (NRC) is having trouble raising enough money for an assessment of U.S. doctoral programs. Everybody agrees that a survey of the quality of U.S. graduate education is important. But the consensus dissolves when it comes to paying for it.

    The National Academies' NRC is trying to raise $5.2 million for what it hopes will be a bigger and better version of two previous assessments, which appeared in 1982 and 1995, of the relative quality of research doctoral programs. Two foundations—Alfred P. Sloan and Andrew W. Mellon—have agreed to kick in $1.2 million, roughly the cost of the 1995 survey. But NRC's attempt to collect the rest from the federal government has so far come up empty. “We've talked to many agencies, but we haven't generated any interest,” laments one NRC official.

    Third time, no charm.

    NRC has put off collecting data for its latest survey of graduate programs.

    As a result, last month NRC officially postponed by 1 year the scheduled 1 July 2005 start of the assessment, a multistage exercise that includes a compilation of institution and program demographics, an analysis of each faculty member's publishing record, and a polling of graduate students. (An earlier schedule had the survey beginning last summer.) The decision, which study director Charlotte Kuh blames on “a delay in funding,” means an expected publication date of 2008 rather than the original target of 2006.

    That's a blow to what Princeton University astrophysicist Jeremiah Ostriker calls “the premier way to measure graduate education.” Ostriker chaired an NRC panel whose recommendations on methodology and scope have been incorporated into the new survey (Science, 12 December 2003, p. 1883). The delay cedes ground to commercial rankings, notably by U.S. News and World Report. It also complicates life for U.S. institutions with aspiring programs that look to the NRC survey to validate their progress at a time when graduate schools are facing growing competition from other nations for the world's best students.

    The holdup is a big disappointment to J. Bruce Rafert, dean of the graduate school at Clemson University in South Carolina, who persuaded his bosses to pony up additional resources to gather data from faculty, students, and staff to pass along to NRC. “I had coordinated data collection with the IT people and held a number of workshops for faculty and staff,” says Rafert. “We were fairly far into this when I heard [about the delay].”

    Some administrators aren't taking the news lying down. In a meeting last month of graduate deans, Lawrence Martin of Stony Brook University in New York proposed that universities pay an annual subscription fee to raise the necessary funds. “Of course the government has a stake,” says Martin. “But if the feds don't want to pay, then we have to do it another way. For me, it's not an option not to do it.” A modest annual fee, Martin noted, would also allow NRC to update the survey more frequently than the current rate of once every 13 years. The proposal makes a lot of sense to many deans. “It's the best suggestion that I heard at the meeting,” says Rafert.

    But other administrators are cool, if not downright hostile, to financing the survey that way. Universities would already be paying indirectly for the assessment with a sizable investment of staff time and resources, argues John Vaughn of the Association of American Universities in Washington, D.C., a coalition of 62 major research institutions in the United States and Canada. He also thinks the assessment will generate data that can help the federal government gauge the quality of the scientists whom it is supporting.

    “I think [a subscription] would be a real mistake because graduate training is a society-wide issue,” says Vaughn. “It's also a slippery slope; if universities pick up the tab for this, then the government may start looking to duck other obligations, too.” Debra Stewart, president of the Washington, D.C.-based Council of Graduate Schools, also fears that the survey's credibility could be tainted if its primary audience also pays the freight.

    Academy officials hope to meet this month with presidential science adviser John Marburger to make the case for the government's involvement. (Neither of the previous NRC surveys received federal funding, although the National Institutes of Health, the National Science Foundation, and the U.S. Department of Agriculture helped finance the methodology review that Ostriker chaired.) But they may need stronger arguments than those they've used to date.

    “The NRC survey is well-designed and likely to be an improvement on all previous assessments,” Marburger said in an e-mail to Science. “But it is more directly relevant and useful to the surveyed institutions than to the funding agencies.” One government official who has heard NRC's pitch found it lacking. “We thought that we could use the technical portion of the assessment to help us evaluate our own training programs,” says the official, who requested anonymity. “But that idea doesn't really hold up. We already get a lot of information from our grantees.” At the same time, the official added, some issues of interest to an agency may be too specialized to show up in the NRC survey.

    Although Vaughn sees NRC's suspension of the survey as a necessary evil, Martin worries that it could be the beginning of the end. “After telling people get ready, get ready for the NRC survey, now I'm sick of talking about it,” says Martin. “It's off the table, as far as I'm concerned.”

    The uncertainty has also led him to explore other ways to assess the quality of graduate education, such as mining existing databases that measure the quantity and quality of scholarly publications. “It'll provide only a subset of the whole picture,” Martin admits. “But it's something we can do on our own, inexpensively, and repeat as needed.” That's more than the NRC can offer, at least right now.


    A Genomic View of Animal Behavior

    1. Elizabeth Pennisi

    By integrating studies in genomics, neuroscience, and evolution, researchers are beginning to reveal some of the mysteries of animal behavior

    Why a dog—or a human for that matter—cuddles up with one individual but growls at another is one of life's great mysteries, one of the myriad quirks of behavior that have fascinated and frustrated scientists for centuries. Here's another: Are we hard-wired to tend our young or culturally indoctrinated to have family values?

    It's no surprise that such mysteries remain unsolved. They are rooted in complex interactions between multiple genes and the environment, and the tools to tackle them have largely been unavailable until recently. But behavioral researchers are beginning to apply techniques that are transforming other areas of biology. They are using microarrays—which can track hundreds or thousands of genes at once—to learn, for example, why some honey bees are hive workers and others are foragers, and what makes some male fish wimps and others machos.

    Sweet “tooth.”

    A gene that prompts roving in fruit flies also makes them more eager to sip sugar.


    They are also comparing the sequenced genomes of the growing menagerie of animals, probing whether genes known to influence behavior in one species play similar roles in others. Investigators have even gone so far as to swap gene-regulating DNA sequences between species with different lifestyles; in one case, they transformed normally promiscuous rodents into faithful partners.

    While these comparative approaches are de rigueur for evolutionary biologists, they are something new for many neuroscientists and others who typically study behavior in a single model organism, says Gene Robinson, an entomologist at the University of Illinois, Urbana-Champaign, who is trying to encourage more crosstalk between disciplines. “There is this clear gulf between people who are using modern genetic techniques to study very specific questions and the people who are studying natural diversity,” adds Steve Phelps from the University of Florida, Gainesville. But as more behavioral scientists take up the tools of genomics and comparative biology, the payoff may be a deeper understanding of the molecular basis of behavior in animals—even people—and how behaviors originally evolved. The field “is very ripe for a productive synthesis,” says Phelps.

    Foraging for genes

    As gene sequencers turn their attention to deciphering the genomes of dozens of evolutionarily diverse species, a deluge of genome data is beginning to transform some aspects of behavioral science. Instead of just probing the minutiae of how a gene works in one organism, scientists are increasingly investigating how a particular gene operates in multiple species.

    Take the story of a wanderlust gene studied by Marla Sokolowski of the University of Toronto, Ontario, Canada. Almost 25 years ago, Sokolowski and her colleagues discovered that a then-unidentified gene, which they dubbed foraging (for), controlled how much a fruit fly wandered. One variant of the gene makes a fly a more active forager—a “rover”—while another variant causes a fly to be less active, a “sitter.” In 1997, her team finally cloned this gene, which codes for a protein called cGMP-dependent protein kinase (PKG), an important cell-signaling molecule (Science, 8 August 1997, pp. 763, 834). The rover variant turned out to generate higher quantities of the signaling protein.

    This gene has recently proved key to feeding behavior in other invertebrates as well. In 2002, working with Sokolowski and her colleagues, Robinson and Yehuda Ben-Shahar, also from the University of Illinois, found that changes in the activity of for in honey bee brains prompted hive-bound workers to change roles and begin actively foraging for food. That same year, other researchers demonstrated that this gene influenced how likely nematodes were to explore their environment.

    In the May-June 2004 issue of Learning and Memory, Sokolowski and her colleagues demonstrated that the PKG gene affects another behavior—how readily fruit flies respond to sugar. Rover flies are quick to extend their proboscis when exposed to sugar and continue to be stimulated by repeated exposure to sugar, while sitters gradually become used to the sweet stuff and ignore it, they reported. “It suggests that rovers may keep on searching for food because they don't [become indifferent to sugar],” says Sokolowski. This constant movement may be an evolutionary advantage for rovers in places where fruits and other foods are scattered.

    Given the apparent importance of for in the behavior of fruit flies and other species, Sokolowski and Mark Fitzpatrick from the University of Toronto have now looked across the animal kingdom for the gene and others related to it. They searched public gene databases, and earlier this year, in the February Journal of Integrative and Comparative Biology, they reported finding 32 PKG genes from 19 species, including green algae, hydra, pufferfish, and humans. The strong sequence conservation of the genes between many species hints that they may play a role in food-related behavior in many organisms. “By studying [for] in additional species, we will find out how it modulates foraging behavior in different evolutionary scenarios,” says Sokolowski.

    The buzz about microarrays

    Comparative genomics is helping researchers pinpoint specific genes involved in some behaviors, but scientists are also using microarrays to cast a broader net. For example, Robinson, behavioral geneticist Charles Whitfield, and their colleagues at the University of Illinois are using these gene expression monitors to study honey bee behavior. They first used microarrays to look at the differences, beyond the PKG gene, that distinguish bees that tended the hives from bees that left the hive for pollen (Science, 10 October 2003, p. 296). Of the 5500 genes examined, they found 2200 whose activity in the brain varied between the two types of bees.

    Now they have begun to tease out the role of the hive environment in stimulating “nurse” or “forager” genetic regimes—finding genes that help regulate the PKG gene's activity. They raised newly emerged bees with no exposure to other bees, then used microarrays to test how certain chemicals known to change bee behavior alter the isolated insects' genetic activity. Last year, Christina Grozinger, now at North Carolina State University in Raleigh, showed that a hormone produced by the queen bee shifted gene expression toward the nurse profile, possibly by suppressing the for gene. Ben-Shahar conducted a similar experiment using a hormone that promotes foraging behavior. About half of the genes in the isolated bees shifted in a forager-like direction—and those typically active in hive worker bees turned off.

    “We had no genes going in the wrong direction,” says Whitfield. Now he and his colleagues are looking at gene expression patterns in bees that either build combs or remove dead bees from a hive. The effort may provide a handle on which genes might promote these construction and undertaker behaviors.

    Neurobiologist Hans Hofmann of Harvard University uses microarray technology to probe the behavior of fish. He's investigating the genetic basis for the presence of studs and social outcasts among male cichlids. Some macho males sport bright colors, bully their peers, and court females. Others, the wimps, have small gonads and spend most of their time feeding or swimming in schools with other wimps. In certain circumstances, however, wimps become studs and vice versa, switches that seem to be driven by changing environments.

    Close contact.

    Overly friendly mutant mice helped clarify the genetic pathway involved in reactions to strangers.


    In the traditional approach, Hofmann would have tried to track individual genes involved in these transformations. Instead, he turned to microarrays and, in less than a year, has identified 100 genes that likely shape the male's social status. Some are genes that Hofmann had expected to be involved, but others, such as several that encode ion channels, were surprises. He and his colleagues are now looking more closely at cichlid brains for differences in expression patterns between genes identified in the array studies. “Some of these genes that we decided to follow up, we would not have looked at without this approach,” Hofmann notes.

    For both Robinson and Hofmann, microarrays have changed the way they investigate genes and behavior. In the pre-genomics era, both chased after candidate genes—those they had reason to suspect were important. But that tunnel vision “doesn't give you a perspective of how many other [genes] are involved,” Whitfield explains.

    Pathways to behavior

    The genetic bounty provided by microarrays poses its own challenges, however. The devices can turn up many genes involved in even a simple behavior, and the molecules those genes encode need to be tied together into a logical pathway. Piecing together that jigsaw puzzle is no easy task.

    Elena Choleris from the University of Guelph has taken on that challenge and has worked out the relatively simple pathway underlying one behavioral response in a rodent. She, Martin Kavaliers at the University of Western Ontario, London, Canada, and Don Pfaff from Rockefeller University in New York have shown the genetic interactions necessary for one mouse to recognize another and to react in a friendly or unfriendly manner.

    Social status.

    It takes many genes to transform hive workers into foragers.


    Researchers have known for several years that at least four proteins are involved in this process of social recognition: two estrogen receptors, located in different parts of the brain, and a neuropeptide, oxytocin, and its receptor. Choleris looked at the interplay of these molecules by breeding mutant mice lacking each component. In different groups of mice, she and her team disabled one of the genes encoding the receptors or oxytocin. No matter the genetic defect, the outcome was the same: The mutant mice couldn't tell a familiar mouse from a stranger and were no longer worried about newcomers.

    From additional experiments, Choleris has deduced some of the protein connections in what she calls a micronetwork, or micronet: One of the estrogen receptors controls oxytocin production in the hypothalamus, while the other receptor works in the amygdala to control the production of oxytocin's receptor. If any component of this micronet is interrupted, the whole pathway breaks down. The micronet exemplifies “how multiple genes act in parallel in an orchestrated manner between different systems and different brain areas,” says Choleris. In the wild, a breakdown of this particular micronet and the resulting social recognition deficits could have powerful implications. Choleris and colleagues have recently found that her mutant mice have a diminished ability to sense and stay away from nearby mice carrying parasites, for example.

    Beyond the gene

    Microarrays are powerful tools for spotting genes that underlie different behaviors, but the way those genes are regulated may be just as important as the proteins they produce. Take the case of the prairie vole and the meadow vole.

    The prairie vole (Microtus ochrogaster) is faithful to its mate; meadow voles (Microtus pennsylvanicus) are not. Yet the DNA sequence for vasopressin, the neuropeptide governing this trait, is the same in both species, as is the sequence of the gene for the hormone's receptor protein. There are, however, significant species differences in the number of brain receptors for vasopressin: Prairie voles have a lot more.

    In 1999, Larry Young, a neuroscientist at Emory University in Atlanta, Georgia, noticed that a regulatory region, a DNA sequence that sits at the beginning of the receptor gene, was longer in the monogamous species. When he put the prairie vole's vasopressin receptor gene and its regulatory region into mouse embryos, the resulting adult rodents were more faithful than is typical for that particular mouse species. The same has now proved true for meadow voles, he and his colleagues reported in the 17 June Nature. When he put the full prairie vole gene, including the regulatory region, for this receptor into meadow voles, males abandoned their promiscuous ways and began acting like faithful prairie voles.

    Michael Meaney from McGill University in Montreal, Quebec, has found that a different regulatory region, called a promoter, is pivotal in another social relationship, the one between parents and their offspring. In the early 1990s, he and others had demonstrated that when a mother rat fails to lick and groom her newborn pups, those pups grow up timid and abnormally sensitive to stress.

    Mother's touch.

    Standoffish mother rats cause chemical changes in DNA bases that make pups timid adults.


    The key seems to be methylation, a process in which DNA sequences are chemically modified by the addition of methyl groups to cytosine bases. This often suppresses the activity of a gene. Meaney's team discovered that in rats, a mother's behavior alters the typical methylation of the promoter for the gene for the glucocorticoid receptor in her offspring. In the brain, this receptor protein helps set off the cascade of gene expression that underlies the stress response.

    Before birth, there's no methylation of this gene promoter. But in rats neglected by their mothers, the promoter is methylated shortly after birth, Meaney and his colleagues reported in the 27 June online Nature Neuroscience. This increased methylation causes less of the receptor to be produced, creating anxious animals. And because DNA methylation tends to last the life of the animal, it could explain why the pups' personalities don't change as they mature, Meaney notes.

    All-out gene search.

    Microarrays (left) are helping to uncover the genes that make some male cichlids more macho (lower fish) than others.


    While most behavioral genetics researchers have concentrated on non-human species, some are now slowly venturing into the murky waters of human behavior. Meaney's team, for example, is following 200 mothers and their children, looking at the interplay between maternal care and activity in key genes in the offspring. “The extent to which researchers are finding similar patterns” between animals and people is quite promising, notes Stephen Suomi, a psychologist at the National Institute of Child Health and Human Development, Laboratory of Comparative Ethology, Bethesda, Maryland.

    These patterns are prompting new research alliances. Genes can represent a common ground, increasing “the links between individuals interested in [neural] mechanisms and the people who are interested in behavior,” explains Andrew Bass, a neuroethologist at Cornell University in Ithaca, New York. With this common ground will come a greater understanding of the brain as it relates to behavior, says Pfaff. And that, he adds, “is exciting to the nth degree.”


    Source of New Hope Against Malaria Is in Short Supply

    1. Martin Enserink

    New drugs based on an old Chinese cure could save countless lives in Africa, if health agencies and companies can find ways to make enough

    It seemed like a classic case of bait and switch. In 2004, the World Health Organization (WHO) and the Global Fund to Fight AIDS, Tuberculosis, and Malaria threw their weight behind a radical change in the fight against malaria in Africa. Old, ineffective drugs were to be abandoned in favor of new formulations based on a compound called artemisinin that could finally reduce the staggering death toll. More than 20 African countries have signed on. But the catch is that there aren't nearly enough of the new drugs to go around.

    Just before Christmas, WHO—which buys the tablets from Novartis for use in African countries—announced that it would deliver only half of the 60 million doses anticipated in 2005, leaving many countries in the cold. “It's a very cruel irony,” concedes Allan Schapira of WHO's Roll Back Malaria effort.

    Other companies producing the drugs have the same problem as Novartis. Artemisinin is derived from plants grown primarily on Chinese and Vietnamese farms, and they have not kept up with demand. Several plans are afoot to create a new, more stable, and cheaper source. Last month, for instance, the Bill and Melinda Gates Foundation announced a $40 million investment in a strategy to make bacteria churn out a precursor to artemisinin. But such alternatives will take at least 5 years to develop, so the shortages are likely to persist, warns Jean-Marie Kindermans of Médecins Sans Frontières in Brussels.

    Fields of gold.

    Extracts of Artemisia annua (right) provide powerful new malaria drugs, but farms have not met demand for the shrub.


    New malaria drugs are badly needed. The parasite Plasmodium falciparum has developed resistance to the mainstays, such as chloroquine and sulfadoxine-pyrimethamine. The death toll—more than a million annually—is not declining, despite Roll Back Malaria, an ambitious international campaign launched in 1998 to halve mortality by 2010.

    Enter Artemisia annua (also known as sweet wormwood or Qinghao), a shrub used for centuries in traditional Chinese medicine. In the 1970s, Chinese researchers discovered that its active ingredient, artemisinin, kills malaria parasites; since then, several chemical derivatives with slightly better properties have been developed. Known by names such as artemether or artesunate, they cure more than 90% of patients within several days, with few side effects observed so far. Best of all, no resistance has been seen yet. To keep it that way, WHO and others recommend that artemisinin compounds always be used with a second drug in a so-called Artemisinin-based Combination Therapy, or ACT.

    Although ACTs are widely used in Asia, their introduction in Africa has lagged. Countries have been reluctant to make the switch because, at about $2.40 per treatment course, ACTs are 10–20 times more expensive than existing drugs. The Global Fund has also dragged its feet, some allege, by funding the purchase of older, cheaper drugs for too long. Things began to change when an expert group published a scathing letter in The Lancet in January 2004, accusing the Global Fund and WHO of “medical malpractice.” Both organizations denied the claims, explaining that they supported ACTs but that change took time. Both also concede that the ensuing debate spurred them to redouble their efforts.

    But companies are reluctant to produce the drugs, as are farmers to grow Artemisia, without guarantees that they'll sell—and that's the problem. The Global Fund does not have nearly enough money to fund the drugs' introduction across Africa. Donor countries like the U.S. and the U.K. appear reluctant to spend aid money on market guarantees for big pharma, says Schapira, because it could be seen as lining shareholders' pockets; at an emergency session at WHO just before Christmas, no donors made any commitments.

    WHO's hope is that growing demand will eventually create a stable artemisinin supply at low prices. Artemisia farms are now springing up in India, and WHO is supporting experiments to grow the plants in east Africa.

    The Gates Foundation is banking on a less fickle supply route. Over the past 10 years, chemical engineer Jay Keasling and colleagues at the University of California, Berkeley, have spliced nine genes into Escherichia coli bacteria to make them produce terpenoids, a class of molecules that includes artemisinin. With a few genes borrowed from Artemisia, they should be able to produce an artemisinin precursor, Keasling says.

    On 13 December, the foundation announced a $42.6 million grant to the Institute for OneWorld Health in San Francisco—which bills itself as the world's first non-profit pharmaceutical company—to help Keasling finish the engineering. Then a biotech startup will optimize the process for producing artemisinin—“tons and tons of it,” says OneWorld Health president Victoria Hale—about 5 years from now. Her assumption is that pharmaceutical companies will package OneWorld's artemisinin derivatives into ACT tablets and sell them at well under a dollar per treatment.

    There's another alternative. Jonathan Vennerstrom and colleagues at the University of Nebraska, Omaha, have synthesized a compound called OZ277 (or simply OZ) that, like artemisinin, has a peroxide bridge shielded by large chemical rings. The compound has been tested as an antimalarial in vitro and in animals, and it looks even better than the real thing, Vennerstrom and colleagues reported in Nature in August. Ranbaxy, an Indian pharmaceutical company, is developing it further; a phase 1 safety trial has just been completed.

    Ideally, 4 or 5 years from now, OZ will result in new drug combinations that have the power of current ACTs but cost less than a dollar per treatment, says Chris Hentschel, chief executive of the Medicines for Malaria Venture (MMV), a non-profit based in Geneva that supports its development. Still, Hentschel is trying to temper his optimism. Drugs can always fail during testing, and even ACTs may eventually lose their efficacy, like almost every malaria drug before them. That's why, despite the new hope, MMV has its pipeline well-stocked with unrelated candidates.


    Oldest Civilization in the Americas Revealed

    1. Charles C. Mann

    Almost 5000 years ago, ancient Peruvians built monumental temples and pyramids in dry valleys near the coast, showing that urban society in the Americas is as old as the most ancient civilizations of the Old World

    BARRANCA, PERU—A few miles northeast of this small fishing town, the Pan-American Highway cuts through a set of low, nondescript hummocks in the narrow Pativilca River valley. If they were so inclined, the truckers thundering along the road could spot on the hillocks the telltale signs of archaeological activity—vertical-sided cuts into the earth surrounded by graduate students with trowels, brushes, tweezers, plastic bags, and digital cameras.

    Gourd lord.

    This piece of gourd reveals a figure (shown in false color, inset) carved about 2250 B.C.E. in the Norte Chico region.


    The Pativilca, about 130 miles north of Lima, is one of four adjacent river valleys in the central Peruvian seacoast known collectively as the Norte Chico, or Little North (see map below). Pinched between rain shadows caused by the high Andes and the frigid Humboldt Current offshore, this is one of the driest places on earth; rainfall averages 5 cm a year or less. Because of the exceptional aridity, ancient remains are preserved with startling perfection. Yet the same aridity long caused archaeologists to ignore the Norte Chico, because the region lacks the potential for the full-scale agriculture thought to be necessary for the development of complex societies.

    Then in the 1990s, groundbreaking research directed by archaeologist Ruth Shady Solis of the Universidad Nacional Mayor de San Marcos established that such societies had existed in the Norte Chico in the third millennium B.C.E., the same time that the Pharaohs were building their pyramids (Science, 27 April 2001, p. 723). And in the 23 December issue of Nature—in what archaeologist Daniel H. Sandweiss of the University of Maine at Orono describes as “truly significant” work—archaeologists Jonathan Haas of the Field Museum in Chicago and Winifred Creamer and graduate student Alvaro Ruiz of Northern Illinois University in DeKalb reported the startling scope of the Norte Chico ruins, which include “more than 20 separate residential centers with monumental architecture,” and are one of the world's biggest early urban complexes. The ruins are dominated by large, pyramid-like structures, presumably temples, which faced sunken, semicircular plazas—an architectural pattern common in later Andean societies. The new work includes 95 radiocarbon dates that confirm the great antiquity of this culture, which emerged about 2900 B.C.E. and survived until about 1800 B.C.E.

    The concentration of cities in the Norte Chico is so early and so extensive, the archaeologists believe, that coastal Peru must be added to the short list of humankind's cradles of civilization, which includes Mesopotamia, Egypt, China, and India. Yet the Peruvian coast, as Shady has argued, is in some ways strikingly unlike the others. She points out that most of the Eurasian centers “interchanged goods and adaptive experiences,” whereas the Norte Chico “not only developed in isolation from those [societies], but also from Mesoamerica, the other center of civilization in the Americas, which developed at least 1500 years later.” The result, according to Haas, is that the Norte Chico provides a laboratory in which to observe “that most puzzling phenomenon, the invention of the state.” The people of this ancient, isolated society, says Haas, “had no models, no influences, nobody to copy. The state evolved here purely for intrinsic reasons.”

    Cities without farms

    Although the Norte Chico mounds were flagged as possible ruins as far back as 1905, researchers never excavated them because, according to Ruiz, “they didn't have any valuable gold or ceramic objects, which is what people used to look for.” The first full-scale excavation took place in 1941, when Gordon Willey and John M. Corbett of Harvard discovered a single multiroomed building at Aspero, a salt marsh at the mouth of the Supe River. Puzzled by what seemed to be an isolated structure, the team took 13 years to publish their data.

    Willey and Corbett also noted a half-dozen odd “knolls, or hillocks,” which the two men described as “natural eminences of sand.” Thirty years later, in the 1970s, Willey returned to Aspero with archaeologist Michael E. Moseley, now at the University of Florida at Gainesville. They quickly established that the site actually covered 15 ha and that the natural knolls were, in truth, “temple-type platform mounds.” It was “an excellent, if embarrassing, example,” Willey later wrote, “of not being able to find what you are not looking for.” When carbon dating revealed that the site was very old, Moseley says, “it became obvious that Aspero was something big and important.”

    It was also a conundrum. All complex Eurasian societies developed in association with large river valleys, which offered the abundant fertile land necessary for agriculture. And social scientists have long believed that the organization of labor necessary for agriculture was the wellspring of civilization. Aspero, on a little river that coursed through a desert, had almost no farmland. “We asked, ‘How could it sustain itself?’” Moseley says. “They weren't growing anything there, or almost anything.”

    The question prompted Moseley in 1975 to draw together earlier work by Peruvian and other researchers into what has been called the MFAC hypothesis: the maritime foundations of Andean civilization. He proposed that there was little agriculture around Aspero because it was a center of fishing, and that the later, highland Peruvian cultures, including the mighty Inca, all had their origins not in the mountains but in the great fishery of the Humboldt Current, still one of the world's largest. Bone analyses show that late-Pleistocene coastal foragers “got 90% of their protein from the sea—anchovies, sardines, shellfish, and so on,” says archaeologist Susan deFrance of the University of Florida, Gainesville (Science, 18 September 1998, pp. 1830, 1833). “Later sites like Aspero are just full of [marine] fish bones and show almost no evidence of food crops.” The MFAC hypothesis, she says, boils down to the belief “that these huge numbers of anchovy bones are telling you something.”

    Despite its explanatory power, the hypothesis had to be modified when Shady began work at Caral, almost 23 kilometers upriver from Aspero. One of 18 sites with monumental and domestic architecture found by Shady's team, Caral covered 60 ha and was, in Shady's view, a true city—a central location that provided goods and services for the surrounding area and was socially differentiated, with lower-class barrios in the periphery and elite residences with painted masonry walls in the center. Dating to about 2800 B.C.E., Shady says, Caral was dominated by a pyramid bigger than a football field at the base and more than seven stories high, overlooking a plaza bordered by smaller monumental structures. The big buildings suggested a large resident population, but again there were plenty of anchovy bones and little evidence of subsistence agriculture. The agricultural remains were mainly of cotton, used for fishnets, and the tropical tree fruits guayaba and pacae. All were the products of irrigation. At the Norte Chico, the Andes foothills jut close to the coast, creating the sort of swiftly dropping rivers that are easiest to divert into fields.

    Ancient cities.

    Archaeologists have uncovered surprisingly extensive sites in arid river valleys near the Peruvian coast, including mounds in the Fortaleza Valley.


    To Moseley, the abundance of fish bones at Caral suggested that the ample protein on the coast allowed people to go inland and build irrigation networks to produce the cotton needed to expand fishing production. Caral thus lived in a symbiotic relationship with Aspero, exchanging food for cotton.

    The making of a state

    The central structures in Norte Chico cities were constructed in what Haas believes to have been sudden bursts lasting as few as two or three generations. The buildings were made largely by stacking, like so many bricks, mesh bags filled with stones. So perfect is the preservation that the Peruvian-American team can remove 4,000-year-old “bricks” from the pyramids almost intact, the cane mesh around them still in place. (Along with food remains, the mesh provided many of the samples used for carbon dating.) But the impressive size of the monuments is not matched by a rich material culture; the Norte Chico society existed before ceramics.

    According to Creamer, Haas, and Ruiz, the sheer scale of the inland sites raises a major challenge to the MFAC hypothesis. “The great bulk of the population lived inland in these cities,” Creamer says. “If there were 20 cities inland and one on the coast, and many of the inland cities are bigger than the coastal city, the center of the society was inland.”

    But defenders of the MFAC hypothesis remain convinced that the coastal areas were of primary import. “What may be important,” says deFrance, is not the scope of the society “but where it emerged from and the food supply. You can't eat cotton.”

    Whether maritime or inland cities developed first, it seems clear that each depended on the other, and Haas says that this interdependency has major implications. “If I look beyond Aspero at this time, what I see is a bunch of fishing sites all up and down the Peruvian coast. All of them have cotton, but they are on the coast where they can't really grow it. And then you have one big gorilla inland—a concentration of inland sites that are eating anchovies but can't obtain them themselves. It's a big puzzle until you put them together. … I believe we are getting the first glimpses of what may be the growth of one of the world's first large states, or something like it.”

    In archaeological theory, societies are often depicted as moving from the kin-based hierarchy of the band to the more abstract authority of the state in order to organize the defense of some scarce resource. In the Norte Chico, the scarce resource was presumably arable land. Haas, Creamer, and Ruiz think that the land was more valuable than generally believed, and that agriculture was more important than allowed for in the MFAC hypothesis. Luis Huaman of the Universidad Peruana Cayetano Heredia in Lima is examining pollen from the Norte Chico sites and promises that “we will see when agriculture came in and what species were grown there.” Regardless of the results, though, the cities of the Norte Chico were not sited strategically and did not have defensive walls; no evidence of warfare, such as burned buildings or mutilated corpses, has been found. Instead, Haas, Creamer, and Ruiz suggest, the basis of the rulers' power was the use of ideology and ceremonialism.

    “You have lots of feasting and drinking at these sites,” Haas says. “I have the beginning of evidence that there are the remains of feasts directly incorporated into the monuments, the food remains themselves and the hearths from cooking all built into it.” Building and maintaining the pyramids—so unlike anything else for thousands of miles—was the focus of communal spiritual exaltation, he suggests. A possible focus for the religion is the curious figure Creamer found incised on a gourd. Dated to 2250 B.C.E., it resembles in many ways later Peruvian deities, including the Inca god Wiraqocha, suggesting that the Norte Chico may have founded a religious tradition that existed for almost 4000 years.

    Despite their excitement about the new work, MFAC backers see no reason yet to give up the belief that, as Sandweiss puts it, “the incredibly rich ocean off this incredibly impoverished coast was the critical factor.” Only the upper third of Aspero has been excavated, notes deFrance, and its emergence has never been properly dated. If coastal Aspero, though much smaller than the inland cities, is also much older, the MFAC hypothesis might be supported. With Moseley, Shady's team is hoping to resolve the debate by digging to the bottom of Aspero next summer. Meanwhile, Haas, Creamer, and Ruiz have bought a house in Barranca for a laboratory and barracks. They plan to work in the area for years to come. “You're going to be hearing a lot more about the Norte Chico,” Ruiz promises.

  14. ETHICS

    Is Tobacco Research Turning Over a New Leaf?

    1. David Grimm

    Scientists developing reduced-harm tobacco products increasingly rely on tobacco industry funding, but some universities and grant organizations want to forbid it

    A 65-year-old man sitting at a small table in a lab at Duke University Medical Center in Durham, North Carolina, asks for a cigarette, his twelfth in less than eight hours. A researcher is happy to oblige. As the man lights up, a swarm of technicians buzzes around him, drawing blood from a catheter in his arm, making him exhale into a sensor, and administering cognitive tests.

    The experiment, led by neuroscientist Jed Rose, focuses on the volunteer's response to a cigarette called Quest, made from tobacco that's been genetically engineered to contain less nicotine. Rose directs the university's Center for Nicotine and Smoking Cessation Research, dedicated to helping smokers kick the habit. He sees the Quest study as an important step in the center's mission because it indicates that smokers of this new product inhale less deeply than smokers of an earlier “reduced-harm” product—the low-tar cigarette—and may therefore be able to decrease their dependence on tobacco. But the work is controversial. Quest's maker, the Vector Tobacco Company of Research Triangle Park, North Carolina, paid for the study, and tobacco giant Philip Morris funds the center.

    Since the late 1990's, the tobacco industry has provided university researchers with millions of dollars to help develop a new class of reduced-harm products—including modified cigarettes like Quest, tobacco lozenges, and nicotine inhalation devices—ostensibly to reduce the hazards of smoking. Advocates say the industry has turned over a new leaf and is now serious about improving the safety of its products. But critics, who cite the industry's efforts to manipulate science over the past 50 years, see nothing but the same old smoke and mirrors.

    Anti-smoking activists tried to stop tobacco's research juggernaut more than a decade ago—and won some battles. But industry funding is flourishing, igniting a debate on some campuses over whether universities should ban tobacco money and whether grant organizations should deny funding to individuals or schools that take this money—as Britain's Wellcome Trust already does and the American Cancer Society is about to do.

    Burning issue.

    University of Nebraska's Stephen Rennard says bans on tobacco industry funding violate academic freedom.


    It's not a simple issue, says Ken Warner, a public health expert at the University of Michigan, Ann Arbor, and president of the Society for Research on Nicotine and Tobacco. He concedes that the tobacco industry was guilty of misconduct in the past but worries about restricting research. “How do you avoid infringing on academic freedom, and what sort of slippery slope do you create by denying grants on moral grounds?” he asks. “There is a real need for reduced-harm research. The question is, given their history, do we let the tobacco companies fund it?”

    Moral dilemma

    Duke University's Rose thinks the tobacco industry's new focus on harm reduction may usher in a healthier era of tobacco-sponsored research. This research is “high quality, innovative, and unique,” he says, and “very different from the abuses of the past.” He adds, “None of the companies that fund our studies have made any attempt to bias our work.”

    Rose, a co-inventor of the nicotine patch, argues that vilifying the industry won't help the millions of smokers who are trying to quit. “The real enemy is the death and disease smokers suffer,” he says. “If we can use tobacco money to help people lead healthier lives, why shouldn't we?”

    Stephen Rennard, a pulmonary physician at the University of Nebraska Medical Center in Omaha who also receives tobacco industry support, agrees. “I approach this from a public health perspective,” he says. “People are going to continue to smoke, and we need to make them as safe as we can. The tobacco industry needs university research to develop a safer product.”

    One of Rennard's projects, funded by RJ Reynolds, evaluated Eclipse—a standard-looking cigarette manufactured by the company that heats rather than burns tobacco, theoretically producing less harmful smoke. Rennard later used Philip Morris money to determine how much smoke the average cigarette user is exposed to. The findings may help the company design a cigarette that reduces the levels of inhaled smoke.

    Still, Rennard says that taking industry money required a lot of soul searching. “But in the end I realized that this research should be funded by tobacco companies. NIH resources should not be used to improve cigarettes. It would be like the government subsidizing the development of a better laundry detergent.”

    “It's trendy to beat up on the tobacco industry,” Rennard adds. “It's simplistic, and it doesn't help the people who need to be helped. If we delay this research because of concerns about tobacco funding, it could be years before these potentially life-saving products make it to market. That would be the real tragedy.”

    Smoky past

    Others think academic researchers should just say no to tobacco money. Simon Chapman, editor of the journal Tobacco Control and a professor of public health at the University of Sydney in Australia, says that despite their new efforts to support harm reduction studies, the tobacco companies have little interest in public health. “They fund this research to buy respectability and ward off litigation,” he says. Some worry that reduced-harm products are just a ploy to keep smokers addicted. Chapman says that scientists need only look at current examples of tobacco company malfeasance—from targeting youth smokers in Myanmar to using athletes to promote cigarettes in China—to see that the companies haven't changed their ways.

    For many critics of mixing tobacco money with university research, the industry's history speaks for itself. For example, as the link between smoking and disease became clearer in the early 1950's, the world's largest tobacco companies established the Tobacco Industry Research Committee (TIRC)—later the Council for Tobacco Research (CTR)—to fund research into the health effects of smoking. But its main goal, internal company documents now reveal, was to obfuscate risks, and few of the studies it funded addressed the hazards of cigarettes (Science, 26 April 1996, p. 488).

    Harm reducer?

    RJ Reynolds' Eclipse heats rather than burns tobacco, theoretically producing less harmful smoke.


    “During the four decades they operated, TIRC and CTR never came to the conclusion that smoking causes cancer,” says Michael Cummings, the director of the Tobacco Control Program at the Roswell Park Cancer Institute in Buffalo, New York. “These organizations were more about public relations than science.” The industry agreed to shut down CTR in 1998 as part of an agreement—known as the Master Settlement Agreement—that also awarded 46 U.S. states $206 billion in compensation for the cost of treating smoking-related illnesses.

    But CTR wasn't the only problem. Government prosecutors have charged that the companies frequently killed their own research when it came to unfavorable conclusions, funded biased studies designed to undermine reports critical of smoking, and used the names of respected scientists and institutions to bolster their public image. The industry also lost credibility with its previous attempts at harm reduction when it touted low-tar and filtered cigarettes introduced in the 1950's and '60's as “safer,” says Chapman, while suppressing evidence that smokers drew harder on these cigarettes, thereby increasing their uptake of carcinogens. These charges are being revisited in an ongoing federal racketeering case—the largest civil lawsuit in American history—alleging a 50-year conspiracy by the tobacco industry to mislead the public about the dangers of smoking. For its part, the industry argues that it has reformed; Philip Morris spokesperson Bill Phelps says his company believes that investing in research is the best way to address the health risks associated with smoking.

    Richard Hurt, the director of the Nicotine Dependence Center at the Mayo Clinic in Rochester, Minnesota, says researchers considering industry money should remember the toll extracted by tobacco use—4.9 million deaths per year worldwide, according to World Health Organization estimates. “For anyone interested in public health, taking this money is a clear conflict of interest,” he says.

    Academic freedom

    While scientists debate the merits of taking tobacco money, other authorities may take the decision out of their hands. Over the past decade, a number of institutions—including the Harvard School of Public Health and the University of Glasgow—have prohibited their researchers from applying for tobacco industry grants. In addition, organizations such as Cancer Research U.K. and the Wellcome Trust will no longer fund researchers who take tobacco money. The American Cancer Society, one of the largest private funders of cancer research, plans to adopt a similar policy this month.

    No sale.

    University of Sydney's Simon Chapman says the tobacco industry wants to buy researchers' credibility.


    Ohio State University, Columbus, found itself in the eye of the storm in 2003 when Philip Morris offered a medical school researcher a $590,000 grant at the same time a state foundation offered a nursing school researcher a $540,000 grant. Because the terms of the state grant would have prohibited all other university researchers from taking tobacco money, the school could not accept both. “There was a very heated debate among the faculty,” says Tom Rosol, the university's senior associate vice president for research, who ultimately made the decision to take the Philip Morris grant. “It came down to the issue of academic freedom,” he says. “We didn't want to accept a grant that would have placed restrictions on our investigators.” Rosol's decision sparked a backlash, and several departments, including the Comprehensive Cancer Center and the School of Public Health, enacted tobacco funding bans, barring researchers from taking tobacco money in the future.

    A resolution approved by the University of California's (UC) Academic Senate this summer would have the opposite effect. Stating that “no special encumbrances should be placed on a faculty member's ability to solicit or accept awards based on the source of funds,” the proposal would forbid any institutions within the UC system from banning tobacco funding. In a letter endorsing the resolution, UC president Robert Dynes describes such bans as “a violation of the faculty's academic freedom.”

    Not everyone buys the academic freedom argument. “The university should be a role model,” says Joanna Cohen, an expert on university tobacco policies at the University of Toronto. “Academic freedom should not override its ethical responsibilities.”

    Nor does the American Legacy Foundation, a Washington, D.C., tobacco education and funding organization established by the Master Settlement Agreement, have any qualms about denying grants to institutions that take tobacco money. “We don't see this as an academic freedom issue,” says Ellen Vargyas, the foundation's general counsel. “The tobacco industry has a bad history, and this is our way of encouraging institutions not to take their money.”

    The University of Nebraska's Rennard, who made himself ineligible for state money by accepting tobacco industry funds, finds these policies and the university bans deeply flawed. “Political positions should not determine scientific agendas,” he says. “If we restrict research on moral grounds, should we ban grant money from pharmaceutical companies or industries that pollute the environment? Where do you draw the line?”

    As public funding gets tighter, more universities may find themselves confronting this question. The tobacco industry is poised to fill the financial void, but continued charges of company malfeasance will increase the pressure on schools to shun this money. At the end of the day, institutions will have to decide whether to overlook the source of this funding, or take the moral high road and watch it go up in smoke.

  15. As the Galaxies Turn

    1. Robert Irion

    Spiral disk galaxies, serene icons of the universe, are hardy survivors of a battering cosmic history

    Gravity conspires to produce two dominant shapes in astronomy: spheres and disks. Both are on display in spiral galaxies, home to perhaps half the stars in the universe. Spherical central bulges of old yellow suns glow serenely, girdled by a disk consisting of curved arms of hot new stars and dark bands of dust. Such grand stellar disks, long the pinups of astronomy buffs, now play a starring role in studies of how galaxies have evolved.

    Surveys with the Hubble Space Telescope reveal a panoply of disks, only hinted at from the ground, that existed when the universe was less than half its current age. By dating and classifying this huge population, astronomers are recognizing that spiral galaxies are not delicate flowers that have blossomed slowly to their current display. Instead, they are tough perennials that have survived mergers with smaller galaxies and—on occasion—crushing collisions with big ones throughout billions of years of cosmic time.

    In our edge-on view of the Milky Way's plane, we gaze upon just such a stalwart bisecting the night, one that undoubtedly consumed other galaxies. The Milky Way's disk provides clues to this history, but the sleuthing is tough because we're embedded within it. “We have an opportunity to understand it at a much deeper level than other galaxies, because we can measure the motions of individual stars,” says astronomer Heidi Jo Newberg of Rensselaer Polytechnic Institute in Troy, New York. “But we're really just starting.”

    It's all in the gas

    The disks we see today took a long time to develop. “Almost all star formation was in clumps and chaotic structures” for roughly the first 4 billion years of cosmic history, says astronomer Sidney van den Bergh of the Dominion Astrophysical Observatory in Victoria, British Columbia. But during the next 1 billion to 2 billion years, recognizable features started to form under the inexorable pull of gravity.

    Astronomers believe that a typical primitive galaxy was a bloated cloud, slowly rotating and rich in warm gas that had not yet coalesced into many stars. Energy escaped from the cloud as atoms and molecules collided and radiated light. Gravity pulled the cooling gas more tightly together, forcing more frequent collisions, but it would have kept its original angular momentum. As time marched on, the fledgling galaxy flattened and spun faster and faster.

    “The final state of a runaway collapse is a thin disk where all particles go in exactly circular orbits,” says astrophysicist Julio Navarro of the University of Victoria, British Columbia. But a galaxy isn't an idealized whorl of gas, he notes: “When the gas collects into tiny little packets of stars, you get a collection of bullets that never collide.” Without energy-robbing collisions, a star-filled disk cannot settle down if it gets perturbed by another young galaxy plunging into it—a common event in the cosmic past. Instead, stars tend to scatter into spherical swarms, like a disturbed hive of bees.

    This is exactly what happened when astronomers constructed computer simulations of evolving galaxies dominated by stars. “Disks are very fragile, dynamical entities. Mergers mess them up,” Navarro says. But if mergers and collisions were so common in the early universe, why don't we see the sky full of formless elliptical galaxies?

    The influence of gas is the key, Navarro and others now agree. Effervescent gas would have damped out the otherwise shattering effects of major mergers. Adolescent galaxies could have kept gas stirred up in plenty of ways: intense ultraviolet light from massive newborn stars, shock waves from supernova explosions, or outpourings of energy from vigorous cores.

    Recent simulations have shown this damping effect of gas in action.


    An infrared view toward the Milky Way's core reveals a central bulge of stars and the flat disk within which we live.


    For instance, a team led by graduate student Brant Robertson of Harvard University in Cambridge, Massachusetts, produced one of the first realistic disk galaxies in a simulation that spans cosmic history. The model, reported in the 1 May Astrophysical Journal, relies on a “multiphase gas” of cold clouds surrounded by hotter material, which more accurately captures a galaxy's interstellar environment. This hybrid recipe preserves gas during mergers and stabilizes the disk against external onslaughts, Robertson says. The approach works, but it's only a start: Just 1 of 20 simulated galaxies ended up with a flat pinwheel of stars and gas, compared with about half in the real universe. Improved models may need to churn up the gas even more with early bouts of star formation, other researchers believe.

    And in new work submitted to Astrophysical Journal Letters, two of Robertson's co-authors demonstrate that a classic spiral galaxy can emerge even from the wreckage of a violent collision. Astrophysicists Volker Springel of the Max Planck Institute for Astrophysics in Garching, Germany, and Lars Hernquist of Harvard plowed two simulated gas-rich disks into each other. The concussive impact sparked a blaze of star birth, but enough gas remained to settle the merged object into a flat superdisk with clear spiral arms. “If disks can ‘survive’ even major mergers, they are probably less fragile than previously thought,” the researchers write.

    Forty thousand personalities

    Simulations are an alluring way to peer back to galactic youth, but nothing beats the real thing. Enter GEMS—Galaxy Evolution From Morphology and Spectral Energy Distributions—an ambitious program to deduce how overall populations of galaxies have evolved. GEMS studies about 40,000 galaxies in the Hubble Space Telescope's largest contiguous color image of the sky: 150 times as broad as its “deep field” image taken in 1994. Astronomers have good distance estimates to some 10,000 of those galaxies, from spectra obtained at the European Southern Observatory's 2.2-meter telescope at La Silla, Chile.

    For astronomers, GEMS has been as transforming as seeing a photo album of hundreds of ancestors rather than just a few faded snapshots. “From the ground, these galaxies are dots. But from Hubble, each one gets a personality,” says lead scientist Hans-Walter Rix of the Max Planck Institute for Astronomy in Heidelberg, Germany.

    After more than a year, the GEMS team can make firm statements about the life and times of disks since the universe was about 5 billion years old. For example, the team charted the hottest starlight from newborn suns. “For the last 8 billion years, by far the largest majority of stars have formed in disk galaxies that start to resemble our Milky Way,” Rix says. In contrast, elliptical galaxies had their heyday of spawning stars billions of years earlier.

    At the outer reaches of its survey, the team sees what Rix calls “a sufficient number of galaxies with a bulge in the middle and small disks around them.” These objects, he says, are most likely the ancestors of large disk galaxies such as the Milky Way and nearby Andromeda. Moreover, such galaxies grew their disks from the inside out, a maturation that the team traces by comparing the sizes of disks to their distances from us. Today's biggest disks clearly avoided catastrophic disruption from large mergers within the last 8 billion years, Rix says.

    The right neighborhood was important. Galaxies evolved more quickly if lots of others were close by, presumably driven by the stronger gravitational influences. “In dense knots, we find some disk galaxies at early times that appear like the Milky Way today, but they are premature,” Rix says. “They are likely to run out of gas and star formation, merge, and become ellipticals. That is their fate.”

    Step up to the bar

    Not all is symmetrical in the realm of spiral galaxies. About two-thirds of all disks sport “bars”—elongated concentrations of stars embracing the galactic cores. Our Milky Way has one: a stubby bar first suspected in the 1980s and recently mapped by a laborious census of a distinctive class of stars within the disk.

    Looking back.

    A journey far above the Milky Way's disk might reveal this view of its spiral arms and central bar.


    Bars can alter disk galaxies by redistributing mass and angular momentum. “Any kind of perturbation in a cold disk tends to form bars or spiral arms,” says astronomer Shardha Jogee of the University of Texas, Austin. Once formed, a bar tugs gravitationally on gas and pulls it toward the center of the galaxy, triggering the birth of new stars. In theory, this may sow the seeds of a bar's destruction. Some early simulations showed that a central buildup of mass propels passing stars farther out onto great looping paths, dissolving the bar and its narrowly confined stellar orbits.


    In 12 billion years of simulated evolution, a galaxy morphs from chaotic blob (above) to flat disk (bottom right).


    But more recently, astronomers have wondered how quickly these transitions might happen. “The evolution from barred to unbarred and back again can go on in the lifetime of a galaxy, but there has always been a lot of question about how fast this process is,” says astronomer Mousumi Das of the University of Maryland, College Park. GEMS points to a slower transformation than expected. The team, led by Jogee, found a constant ratio of strongly barred to unbarred galaxies at all epochs. The structures survive at least 2 billion years, if not much longer, the authors concluded in the 10 November Astrophysical Journal Letters.

    Another valuable tracer of a galaxy's history is its so-called thick disk, a smattering of older stars that wander above and below the main disk. Astronomers aren't yet sure how stars in the Milky Way's thick disk got there. In one popular scenario, a galaxy merger harassed the stars out of their cozy orbits in the thin disk, perhaps 10 billion years ago. Because there are no stars younger than that in the thick disk, that event probably was the galaxy's last noteworthy consolidation, says astronomer Rosemary Wyse of Johns Hopkins University in Baltimore, Maryland.

    However, Julio Navarro and his colleagues think they see imprints of a more fascinating tale. Scrutiny of the motions and chemical compositions of stars in the thick disk reveals a few odd groupings that have properties dissimilar to those of the rest of the galaxy. The team proposes, provocatively, that the thick disk is not a puffed-up set of the Milky Way's own stars but is shot through with aliens. Arcturus, a bright star not far from the sun, could be one such immigrant from a long-ago devoured galaxy.

    The next step for astronomers involved in this galactic archaeology will be a thorough charting of the motions of millions of Milky Way stars all around us. One such effort, the Radial Velocity Experiment, is under way at the 1.2-meter U.K. Schmidt Telescope in Siding Spring, Australia. And a proposed extension of the U.S.-led Sloan Digital Sky Survey would examine stars in the galaxy's crowded plane, a region the survey has largely avoided.

    Starting in 2011, the European Space Agency's Gaia satellite will scrutinize a billion stars, fully 1% of the galaxy's population. We may then learn how our familiar disk has kept itself together in a universe full of disorderly influences.

  16. Disks of Destruction

    1. Robert Irion

    Exploding white dwarfs are a key yardstick of the cosmos, but how does gas spiraling onto their surfaces make these stellar corpses blow up?

    Most white dwarfs live gently and disappear silently. The remains of stars like our sun, white dwarfs usually cool down for billions of years and fade into black cinders. But some of these stars refuse to go quietly. Aided by matter stolen from other stars, they explode in planet-sized fusion bombs that can outshine entire galaxies.

    These blasts, called type Ia supernovas, became famous in 1998 when astronomers used them as flashbulbs of standard brightness to show that the cosmos is expanding at an accelerating rate. Newer studies have validated that startling claim (Science, 19 December 2003, p. 2038). Today, the supernovas are central in efforts to decipher “dark energy,” the bizarre force driving the universe's hastening growth.

    Yet tough puzzles confound the cottage industry that now studies type Ia supernovas. Astronomers don't yet know how a white dwarf gains enough mass to seal its doom. In the most popular explanation, the dwarf slowly strips gas from a nearby swollen companion star approaching the end of its life; the stolen gas forms an accretion disk that ultimately sparks the white dwarf's spectacular demise. An alternative explanation sees the donor as another white dwarf spiraling toward a messy crash. What's more, theorists disagree deeply about how a dwarf actually blows up.

    With astronomers planning to expand their catalog of type Ia supernovas by thousands and increase their use as cosmological measures, these questions are unsettling. “If you don't understand type Ias, do you trust using them for cosmic evolution?” asks astrophysicist David Branch of the University of Oklahoma, Norman. “Everyone would feel better if we had some understanding of the tools we're using.”

    Shattered to smithereens

    Ordinary low-mass stars create white dwarfs when their central nuclear fires burn out. Gas in the core gets crushed to an Earth-sized ball until electrons—pushed to ever-higher orbital energies—resist further squeezing. But if a greedy dwarf somehow attracts extra gas and its mass approaches the magic “Chandrasekhar limit” of 1.44 times the mass of our sun, its peaceful retirement is over.

    Researchers at least see eye to eye on the basics of what happens next. “Everyone agrees that type Ia supernovas are thermonuclear disruptions of mass-accreting white dwarfs,” says astronomer Mario Livio of the Space Telescope Science Institute (STScI) in Baltimore, Maryland. Close to the critical mass, the increased pressure ignites a chain reaction in the core. New fusion rips the dwarf apart within seconds, Livio says: “The whole thing is shattered to smithereens. There is nothing left behind.”

    Costly meal.

    If a white dwarf (upper right) steals enough gas from a companion star via an accretion disk, the dwarf may explode in a type Ia supernova.


    The debris propelled out by each blast yields the only visible clues. Telescopes detect the spectral signatures of the heavy elements iron and nickel, as well as silicon, calcium, and magnesium. The patterns—especially the amount of radioactive nickel, which powers the supernova's light display—match what astrophysicists expect from the sudden combustion of a white dwarf made mostly of carbon and oxygen.

    The first puzzle for astronomers is the origin of those fatal dollops of extra matter. The favored source is a full-fledged star: a binary companion to the white dwarf. When this star's core starts to run out of nuclear fuel, it burns hotter and puffs up its outer atmosphere. If the white dwarf orbits closely enough, it can swipe this loosely bound gas. Gravity pulls the gas into an accretion disk that dribbles matter onto the white dwarf's surface, and the stellar thief heats up with new life.

    But the transfer must happen at just the right rate: roughly one 10-millionth of a solar mass per year, or 1/30 of Earth's mass. Any higher or lower than this, theory suggests, and thermonuclear flashes on the white dwarf's surface or vigorous winds can expel much of the matter gained from the accretion disk. This Goldilocks requirement sounds frightfully specific, but astronomers think they see white dwarfs accreting in binary systems at something like the right rate. “There appear to be enough of these to get the right kind of statistics for type Ia supernovas, roughly 3 per 1000 years in the Milky Way,” Livio says.
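    The unit conversion in that figure is easy to verify. The short script below—a sanity check of our own, not part of the original reporting, using standard values for the solar and terrestrial masses—confirms that one 10-millionth of a solar mass is indeed about 1/30 of Earth's mass.

    ```python
    # Check: 1e-7 solar masses per year, expressed in Earth masses.
    # Constants are standard reference values in kilograms.
    M_SUN = 1.989e30    # kg, mass of the sun
    M_EARTH = 5.972e24  # kg, mass of Earth

    rate_solar = 1e-7                          # solar masses accreted per year
    rate_kg = rate_solar * M_SUN               # same rate in kilograms per year
    fraction_of_earth = rate_kg / M_EARTH      # and in Earth masses per year

    print(f"{fraction_of_earth:.4f} Earth masses per year")  # about 0.033, i.e. ~1/30
    ```

    So the “Goldilocks” rate amounts to the white dwarf swallowing an Earth's worth of gas roughly every 30 years.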

    Hot on the trail

    Recent research has bolstered this scenario. Astronomers identified a star they believe supplied the matter to trigger the most recent type Ia event known in our galaxy: Tycho Brahe's supernova from 1572. A team led by Pilar Ruiz-Lapuente of the University of Barcelona, Spain, found the star dashing through the sizzling supernova remnant three times faster than neighboring stars—just as if it had been set free from a high-velocity binary orbit. In the 28 October issue of Nature, the team reported that the star looks like an aged version of our sun but puffed up, as if by a recent blast wave.

    Another study took a different tack to reach a similar conclusion about donor stars. STScI astronomer Louis-Gregory Strolger and colleagues examined 42 distant type Ia supernovas, between 2.4 billion and 9.5 billion light-years away. The team compared how many explosions popped off in different epochs of the universe's history with the times that stars formed. In the 20 September Astrophysical Journal, the astronomers concluded that there is an average time delay of 4 billion years for new star systems to start spawning type Ia supernovas. That's about right for one member of a binary pair to expand at the end of adulthood and lose gas to a compact companion, Strolger says.

    But one very different idea lingers stubbornly. Some astronomers hold that type Ia supernovas result from the ruinous mergers of two white dwarfs. Longtime proponent Icko Iben of the University of Illinois, Urbana-Champaign, points to differences from one supernova to the next as a natural outcome of mergers between dwarfs of varying masses. Moreover, type Ia supernovas in ancient elliptical galaxies—where no new stars have formed for billions of years—are best explained by binary pairs of old dwarfs that gradually spiral together, Iben believes.

    Most damning of all, in Iben's view, is the missing hydrogen. Bloated stars that donate mass to a white dwarf should pollute the whole environment with hydrogen, he says. And yet astronomers have seen evidence of hydrogen in just one type Ia supernova, in 2002. It is absent from all the rest—a fact that crashes of hydrogen-free white dwarfs would explain naturally. “To me that is a very powerful argument,” Iben says.

    For years, people dismissed this idea by noting that no suitable merger candidates were seen in the Milky Way. But a recent survey by the European Southern Observatory's Very Large Telescope at Cerro Paranal in Chile changed that. Astronomers observed more than 1000 white dwarfs and discovered one massive binary pair that will merge in about 2 billion years, says survey leader Ralf Napiwotzki of the University of Leicester, U.K.

    One prominent team is dubious that such pairs will explode. Astrophysicists Ken'ichi Nomoto of the University of Tokyo and Hideyuki Saio of Tohoku University in Japan verified earlier calculations that the lighter of two merging white dwarfs would break up completely and form a thick accretion disk around the more massive one. As this material streams onto the dominant dwarf, carbon within the heated star burns into oxygen, magnesium, and neon, Nomoto says. Once the composition changes, a sudden flash of fusion can no longer occur, and the dwarf collapses “peacefully” into a neutron star. But other theorists maintain that synchronized rotation between the two dwarfs would force a different style of accretion, triggering an all-consuming explosion.

    Got a match?

    You might think it would be easy to explain how a single white dwarf blows up, but the story is anything but neat. Theorists know the nuclear physics well, says astrophysicist Alexei Khokhlov of the University of Chicago, Illinois. “But we don't know the initial conditions, and the simulations are very complex,” he says.

    Theorists concur that the event must start at or near the dwarf's core, when increased pressure from the added matter sparks runaway fusion among carbon and oxygen atoms. But even this initial step causes angst. “The most crucial question in these detailed models is how the flame ignites, but there's a lot of hand waving involved,” says astrophysicist Wolfgang Hillebrandt of the Max Planck Institute for Astrophysics in Garching, Germany.

    Hillebrandt's colleague Friedrich Röpke envisions “a central ignition with a foamy structure. One expects strong convection before the ignition, so it should ignite in several spots distributed around the center.” This wave of burning, called deflagration, spreads erratically through the star's interior at subsonic speeds as the material convects, leaving unburned pockets of carbon and oxygen.

    At this point, theorists diverge. The German group thinks deflagration can explain the entire supernova, blowing it apart in about 2 seconds. But others point out that such an explosion would produce debris that is too rich in unburned light elements and too “mixed” from the inside out. Spectra from real type Ia supernovas suggest a more layered explosion, says Oklahoma's Branch.

    To better match those data, simulations by other teams invoke “delayed detonation.” Parts of the dwarf burn with a deflagration flame, and then the whole thing explodes with supersonic shock waves. The shocks forge heavy elements throughout the rest of the dwarf in tenths of a second. Indeed, if deflagration is like a wind-driven fire that torches some trees and skips others, detonation is a bomb that incinerates the whole forest. The problem is that no one knows how to set the bomb off. “There is no known physical mechanism to convert deflagration to a supersonic detonation,” says Röpke.

    Sudden death.

    Erratic waves of explosive burning can rip apart a white dwarf in 2 seconds, according to this simulation.


    One possible solution recently startled the field. Theorists led by Tomasz Plewa of the University of Chicago set off a simulated detonation with a supersonic bubble that rose from the dwarf's interior, rather like a jellyfish. When the bubble broke the surface, it unleashed waves that raced around the white dwarf—confined by gravity—and collided on the far side. The smashup was violent enough to incinerate the star in an asymmetric blast. No one quite knows what to make of the weird sequence of events, published in the 1 September Astrophysical Journal Letters.

    Observers watch these efforts with anticipation and frustration. “We need a guide from theorists to understand the differences in these objects, and I'm not sure whether theory is up to it,” says astronomer Adam Riess of STScI. At one recent talk, Riess says, he counted 15 “knobs” that the theorist turned to adjust his model. “That's a huge number of ways to produce a supernova,” he says. “I found that a little distressing.”

    Keep the faith, says Khokhlov: “We have great hope that the explosion mechanism is somehow channeled through a very narrow evolutionary bottleneck,” with a set of unique solutions that theorists will unveil. Many hope that will happen soon, because without it there is an element of doubt when astronomers claim to divine the history and fate of the entire universe.
