News this Week

Science  09 Sep 2005:
Vol. 309, Issue 5741, pp. 1656

    Scientists' Fears Come True as Hurricane Floods New Orleans

    1. John Travis*
    1. With reporting by Carolyn Gramling, Jocelyn Kaiser, Eli Kintisch, and Erik Stokstad.

    There are times when scientists would prefer to be wrong. Such was the case last week as Ivor van Heerden and other researchers reflected upon the devastation that Hurricane Katrina wrought on New Orleans and the Gulf Coast towns to the east. As director of Louisiana State University's Center for Public Health Impacts of Hurricanes, Van Heerden has since 2002 led a multidisciplinary team looking at what would happen if a major hurricane directly hit New Orleans. The center has studied everything from how the city would flood to how many people might ignore evacuation orders or be unable to flee—almost 1 in 4, they had estimated. “The sad part is that we called this 100%,” says Van Heerden.

    Causing the largest natural disaster in U.S. history, Katrina slammed into the Gulf Coast on 29 August with its eye hitting about 55 km east of the city. Although the storm initially brought more destruction to other areas along the Mississippi and Louisiana coast, several levees protecting New Orleans failed the following day, and the city, about 80% of which is below sea level, filled with water. The floods may have killed thousands, stranded many more, and triggered a massive relief and evacuation effort.

    Numerous studies had warned of this catastrophic scenario, and as it played out, many scientists watched with anger and frustration. “It's easy to do studies. Sometimes it's hard to act upon them,” says Rick Luettich of the University of North Carolina, Chapel Hill, who has helped model how a hurricane could flood New Orleans. “We've had plenty of knowledge to know this was a disaster waiting to happen.”

    Katrina's wrath.

    These satellite pictures of New Orleans taken before (left) and after (middle and right) Hurricane Katrina give a sense of the flooding caused by breaks in the levees holding back Lake Pontchartrain in the north and the Mississippi River.


    In one sense, Katrina, which left many researchers without homes and laboratories (see sidebar, p. 1657), was a rarity: Few hurricanes that powerful have struck the Gulf Coast in recorded history. At the same time, say hurricane experts, the storm contained few surprises. After speeding across south Florida as a category 1 hurricane, it reached the Gulf of Mexico and began converting energy from the warm, moist air into increased intensity. By Saturday, 27 August, Katrina was a category 3 storm—and still growing.

    Timothy Olander, a tropical cyclone researcher at the University of Wisconsin, Madison, recalls waking up the next morning to see that Katrina's central air pressure had dropped from about 960 millibars to below 905 millibars. The storm was now a category 5 hurricane with winds topping 175 mph (about 280 km/h). “I thought, 'Holy cow. That's an amazing development.' You don't see that rapid intensification very often,” he says. Katrina “became one of the strongest storms ever recorded in the Gulf of Mexico-Caribbean region.” Two factors, says Olander's colleague James Kossin, fueled Katrina's growth: “phenomenally warm” waters in the gulf and a lack of strong, high-altitude winds that could have dispersed the storm's energy.
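    For readers keeping score of the category numbers in this story, the Saffir-Simpson scale maps sustained wind speed to a 1-5 rating. A minimal lookup, using the wind thresholds (in mph) as the scale was defined in 2005:

```python
def saffir_simpson_category(wind_mph: float) -> int:
    """Saffir-Simpson hurricane category from sustained wind speed (mph),
    using the thresholds in effect in 2005; returns 0 below hurricane force."""
    thresholds = [(156, 5), (131, 4), (111, 3), (96, 2), (74, 1)]
    for floor, category in thresholds:
        if wind_mph >= floor:
            return category
    return 0

print(saffir_simpson_category(175))  # Katrina at peak intensity -> 5
print(saffir_simpson_category(120))  # a mid-range category 3 storm -> 3
```

    (The scale has since been revised slightly and is now stated in knots; the numbers above reflect the era of this report.)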

    On Sunday morning, 28 August, thousands in New Orleans failed to pay heed to an evacuation order or couldn't leave. Although that shocked many, Van Heerden's center had recently polled 1000 randomly chosen New Orleans residents, using social workers to reach poor people, and had found that 21.4% would stay despite an order to leave, many of them because they lacked the means to escape.

    Just before landfall, Katrina took a jog to the east, sparing New Orleans from the full force of the storm. Because of the way spinning storms interact with land, “hurricanes often wobble to the right as they come ashore,” says meteorologist Hugh Willoughby of Florida International University (FIU) in Miami.

    By landfall, Katrina had also shrunk to a category 4 storm. Scientists have a poor understanding of what regulates hurricane intensity, but Kossin and Willoughby note that some data indicate Katrina weakened because it had just undergone a phenomenon called eyewall replacement. The eyewall is the band of intense wind and clouds that forms around the hurricane's eye. Large storms sometimes develop an outer eyewall that starves the inner one of energy until it degrades.

    Katrina's wobble and weakening seemed at first to prevent what many have called the New Orleans “nightmare scenario.” The city's main threat from hurricanes is a storm surge, the wall of water pushed onto land as the hurricane comes ashore. This surge can rise 8 meters or more as the water goes from deep water into shallow areas and then onto land. Because Atlantic hurricanes spin counter-clockwise, the surge tends to be highest on their east side as the winds help any water moving north.
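    The amplification of a surge moving from deep water into the shallows can be roughed out with Green's law, which says a long wave's amplitude grows as depth to the -1/4 power. This is only a back-of-envelope sketch with made-up depths; operational surge models such as SLOSH and ADCIRC instead solve the full shallow-water equations with wind stress, pressure, and coastline geometry:

```python
def greens_law_amplitude(amp_deep_m, depth_deep_m, depth_shallow_m):
    """Green's law for shoaling long waves: amplitude scales as depth**-0.25."""
    return amp_deep_m * (depth_deep_m / depth_shallow_m) ** 0.25

# A hypothetical 1.5 m sea-surface anomaly over 1000 m of open gulf,
# shoaling to 5 m depth near the coast, roughly quadruples in height:
print(f"{greens_law_amplitude(1.5, 1000, 5):.1f} m")  # -> 5.6 m
```

    Real surges grow further still as wind piles water against the shore, which is why category 4-5 storms can exceed the 8-meter figure cited above.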

    Because much of the city is below sea level, New Orleans is particularly vulnerable to a storm surge moving through the gulf and into Lake Pontchartrain. Over the past few decades, several computer models have shown how strong hurricanes on the right track could cause massive “overtopping” of the levees that, averaging almost 5 meters high, keep the lake from the city. The National Oceanic and Atmospheric Administration's (NOAA's) official storm surge model SLOSH (Sea, Lake, and Overland Surges from Hurricanes) was developed in the late 1960s, and Luettich and several collaborators have created a more sophisticated model called ADCIRC (Advanced Circulation) that has been adopted by the Army Corps of Engineers and other groups. Last year, in an exercise simulating a direct hit by a slow-moving category 3 hurricane, both models showed that the levees would not prevent the flooding of New Orleans.

    In hot water.

    As Katrina traveled through the Gulf of Mexico, unusually warm waters strengthened it into a monster hurricane.


    According to these models, Katrina's storm surge should not have submerged the city. Joannes Westerink of the University of Notre Dame in Indiana, who helped develop ADCIRC, says it estimated that water along the southern shore of Lake Pontchartrain rose only about 3 meters during Katrina. (The various models estimate that the Mississippi coast received a peak storm surge of about 7 to 9 meters, which would be the highest in U.S. history.)

    Instead of overtopping, the catastrophic collapse of several levees—ones that had been upgraded with a thick concrete wall—apparently sealed the city's fate. Stephen Leatherman, director of FIU's hurricane research center, suggests that the lake's raised levels may have increased water pressure to the point that water flowed through the earthen levees on which the concrete walls sat. “Then the whole thing collapses. This is how an earthen dam collapses during a flood,” he says.

    The devastation from Katrina may reignite interest in bolstering the wetlands south of New Orleans to provide more of a hurricane barrier. As a storm passes over, wetlands and barrier islands along the coast sap its energy and reduce storm surge. By some estimates, however, up to 100 square kilometers of this buffer disappear each year, largely because the Mississippi River has been leveed and dammed so much that it deposits much less sediment onto the delta. (Katrina itself wiped out barrier islands.)

    In 1998, a collection of state and federal agencies, including the Environmental Protection Agency and the Army Corps of Engineers, proposed Coast 2050, a $14 billion strategy to restore Louisiana's wetlands (Science, 15 September 2000, p. 1860). But the project never won federal funding and hasn't moved beyond the planning stage.

    Any renewed debate over coastal restoration will likely have to await the long cleanup and recovery of New Orleans and the surrounding areas, a process that will take months, if not years. Among the research institutions bearing the brunt of Katrina were Tulane University, the University of New Orleans, and Xavier, all of which lost power to most of their campuses. The three universities have canceled their fall semesters and are still surveying the damage.

    The National Institutes of Health (NIH) has begun working with the medical schools of Louisiana State University and Tulane in New Orleans, which together have about 300 NIH grants totaling $130 million annually. NIH is trying to arrange for temporary lab space at other universities for research teams displaced by the flooding. “It's clear that the time to return to facilities that will be functional is undefined at this point,” said NIH Director Elias Zerhouni. “We need to make provisions to continue their research and be able to support them during this interim period.” NIH was also working to get generators and other support to help New Orleans-area researchers keep lab animals alive and samples cold.

    Farther from the floodwaters, the picture is less bleak. The deluge from Lake Pontchartrain never reached Tulane's National Primate Research Center (TNPRC), situated on higher ground on the opposite side of the lake. Still, Katrina's winds caused extensive damage to the facilities, says Tom Gordon of Yerkes National Primate Research Center in Atlanta, Georgia, who was contacted by one of the TNPRC employees who stayed behind to care for the animals. “In the last days, [the center] took some extraordinary precautionary measures to do what they could,” Gordon says.

    Deadly surge.

    A computer model of a fictional category 3 hurricane shows how a storm surge moves from the gulf into Lake Pontchartrain and floods the New Orleans area.


    Also weathering the storm were several major research facilities, including the Laser Interferometer Gravitational Wave Observatory at Livingston, Louisiana, and NASA's Stennis Space Center in Mississippi. Lockheed Martin's Michoud Assembly Facility in New Orleans, which builds the external fuel tanks for the space shuttle, reported no injuries and only minor wind and roof damage, says June Malone, a spokesperson for NASA's Marshall Space Flight Center in Huntsville, Alabama.

    The gulf itself may sustain damage from Katrina. Data are still coming in to NOAA, but two large oil spills from coastal storage facilities have been identified. “We would expect environmental impacts,” says Tom Callahan of NOAA's Hazardous Materials Response division.

    Katrina scuttled at least one major scientific meeting. The Interscience Conference on Antimicrobial Agents and Chemotherapy was scheduled for New Orleans later this month but will now take place in Washington, D.C., in December. Ironically, New Orleans was also going to host two major hurricane research conferences in the spring.

    Even as the New Orleans region and its research institutions struggle to recover from Katrina, many are casting nervous eyes to the gulf. Hurricane season is far from over this year, and researchers say that the United States has entered a period that is likely to bring more major hurricane strikes. “That's scary,” says Olander. “Who knows what else is on the way?”


    Riding Out the Storm

    1. Carolyn Gramling

    Immunologist Seth Pincus survived Hurricane Katrina, but much of his research may not. Evacuated from Louisiana State University Children's Hospital in New Orleans on Thursday, Pincus left hundreds of fragile blood and tissue samples—representing years of HIV and other infectious disease research—to an uncertain fate.

    Pincus, 57, studies the interaction of antibodies and pathogens and directs the hospital's Research Institute for Children. Throughout the storm, he and several hundred other hospital employees stayed to look after 100 remaining patients, as well as research samples belonging to him and colleagues. “We probably held out the longest,” Pincus says. “A lot of people in New Orleans wound up abandoning their work. I think every scientist there was worried about what's more important—my experiments or my life.”

    The low point came 2 days after the hurricane, Pincus says. The staff realized that the lack of clean water, combined with fears of looters, posed a health risk that would force them to abandon the hospital—and the hundreds of research mice and rats that they had managed to save. Rather than let the animals starve, dehydrate, or overheat, Pincus euthanized them with pentobarbital. Then he packed what he could into insulated containers, hoping to keep cell lines and microbial collections cold until they could be transported to Baton Rouge. “Everything I own and do is [normally] in the -80°C freezer and liquid-nitrogen tanks,” Pincus says.


    In the end, the staff didn't want to wait for the planned afternoon exit convoy and began to leave hours ahead of schedule. “It was so hectic and crazy,” Pincus says. “We had to leave probably the most important specimens.” Samples packed for the trip, but abandoned, may last for a week, he says. The freezer was still running on generator power when Pincus left—but will automatically shut off unless the New Orleans SWAT team using the building as a command center keeps it running.

    Pincus plans to settle in at a temporary base for the Children's Hospital set up in Baton Rouge. Although the National Institutes of Health has extended grant deadlines for flood victims, he wonders how New Orleans researchers will stay competitive, with delays of months and the loss of research samples and animal colonies. Some colleagues, he says, may choose to go elsewhere. “That's the big concern for New Orleans: If we can't get back up and going within 2 to 3 months, anyone who can go anywhere else will.”

    Pincus tries to remain hopeful. “I may have to start all over again,” he says. “But maybe this is an opportunity to take some novel approaches. In some ways, it may even be liberating.”


    At Last, a Supportive Parent for Saturn's Youngest Ring

    1. Richard A. Kerr

    Tenuous to the point of near-invisibility, the ephemeral E ring of Saturn shouldn't be there at all. Its icy, micrometer-size particles can't possibly survive more than a few centuries in their orbit far beyond the main rings, so something must be replenishing the E ring. For 25 years, planetary scientists have looked to Saturn's moon Enceladus for an explanation; perhaps icy volcanoes on the inscrutable satellite are the source. Last July, the Cassini spacecraft did detect volcanolike activity on the 500-kilometer satellite, only to find that no particles were being ejected. Nothing but water vapor was shooting off the newly found hot spot near the south pole (Science, 5 August, p. 859).

    Now, with further analysis, Cassini scientists have decided that what may be geyserlike venting at the hot spot is in fact launching ice particles into the enveloping E ring, and in sufficient quantities to sustain it. “Before, we were puzzled,” says Cassini team member Larry W. Esposito of the University of Colorado, Boulder. “Now we have a plausible story.”

    An orphan no more.

    Geysers on Enceladus replenish the E ring (here in false color from the Hubble Space Telescope).


    At first, July's close flyby of Enceladus seemed to support a relatively mundane source for the E ring, says Frank Spahn of the University of Potsdam, Germany, a member of Cassini's Cosmic Dust Analyzer team. As Cassini passed within 175 kilometers of the moon, the number of particles detected by the dust analyzer rose and declined smoothly. That's just what computer models suggested Cassini would see if the steady rain of micrometeorites that strikes Enceladus erodes E ring particles from the entire surface of a geologically dead moon. But Cassini instruments also detected a relatively hot spot (a cool 85 K) near the south pole, where particularly warm gouges in the surface were giving off water vapor. The hot spot caught Spahn's attention, as did the way the sharp peak in venting water came a minute later than the broad peak in dust hits. So Spahn and his colleagues went back to their models.

    With further work, the Potsdam group found that only a small, localized dust source could produce a broad peak in dust detections as well as the 1-minute difference between dust and water peaks. “We're astonished, [but] there's no other way,” says Spahn. The particles “have to come from the south pole.”

    Using Cassini ultraviolet observations of a star passing behind the south pole emissions, Esposito and his team members also find that if even a small proportion of the water vented by Enceladus is in the form of ice particles, there would be enough to sustain the E ring. There would also be enough particles to account for last year's E-ring outburst of atomic oxygen observed by Cassini (Science, 14 January, p. 202), he says. He now postulates that either geyserlike activity on Enceladus surged then or that Saturn's magnetosphere accelerated its charged-particle erosion of E-ring particles.

    The 25-year search for the E ring's source seems to be over, but that only shifts attention to the next mystery: What is Enceladus doing with a warm south pole in the first place? It is too small and old to generate enough heat by radioactive decay. Jupiter's fiery Io heats up through gravitational interactions with other moons, but Enceladus is not in one of the usual orbital relationships with another satellite that would allow tidal energy to be pumped in. One thing's for sure, though: The ball is now in the orbital dynamicists' court.


    Journal Plans Faster Thumbs-Down for Rejected Papers

    1. Adrian Cho

    A good editor can quickly sift the wheat from the chaff in a pile of physics papers. At least that's what the editors of the leading physics journal, Physical Review Letters (PRL), have decided. Last month, they announced they will increase the fraction of submitted papers they reject without peer review from between 10% and 15% to between 20% and 25%. The move should ease the burden on referees, speed up the vetting process, and help PRL compete for papers with Science and Nature.

    PRL has always summarily rejected papers that are obviously wrong, says editor Jack Sandweiss of Yale University in New Haven, Connecticut. Editors have quickly rejected another 5% to 10% of submissions because they clearly are not important and accessible to a broad range of physicists, as PRL papers are supposed to be. Internal studies show, however, that editors can predict which papers referees will ultimately reject on the same grounds, Sandweiss says, so editors will now make that call more often. “We're not changing the criteria for Physical Review Letters,” he says. “We're just applying them earlier.”

    The new policy should help PRL cope with a rising tide of manuscripts. Editors expect more than 10,000 submissions this year, twice as many as they received 12 years ago. PRL accepts about 35% of submissions, and the median time to acceptance is more than 4 months, says editor Stanley Brown from his office in Ridge, New York.

    Physicists are cautiously optimistic about the new approach. If a paper is going to be rejected, it's better to find out sooner so it can be submitted to another journal more quickly, says Raffi Budakian of the University of Illinois, Urbana-Champaign. Speeding up the selection process could help PRL vie for high-profile papers with Science and Nature, which reject most submissions without peer review and accept papers more quickly. Speed of review is “a big factor” in deciding which journal to submit a paper to, says J.C. Séamus Davis of Cornell University in Ithaca, New York: “Sometimes you need to get the results out there in a few weeks.”

    Authors can appeal a rejection. But an editorial on the PRL Web site encourages authors to send rejected papers to more appropriate journals. In January, the Journal of the American Chemical Society announced a similar change in policy.


    Supercomputer, X-ray Source Win Backing in Japanese Budget

    1. Dennis Normile

    TOKYO—Japanese scientists hope that the government will ante up $67 million in 2006 to begin building a next-generation supercomputer and an advanced x-ray source. Last week, their hopes were raised when the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) included the projects in its annual budget submission (Science, 2 September, p. 1473). But they can't claim the pot until the end of a new science budget game.

    Just a few years ago, the ministry's endorsement would have been sufficient to win funding. But now a revamped Council for Science and Technology Policy, nominally headed by the prime minister, joins the game with a list of government-wide priorities and proposals ranked according to those priorities as well as a project's scientific merit. And with the central government hoping to rein in public spending, the rankings are becoming even more important.

    One council priority is strengthening the country's scientific infrastructure. (Another is the development of human resources, for which the ministry responded by expanding grants for young researchers.) And that's where both big projects shine. The proposed 10 petaflops (10 quadrillion calculations per second) supercomputer would be several times faster than any of today's machines. The ministry is asking for $37 million as a down payment on a $1 billion, 7-year project. “Computer simulation has become exceedingly important in manufacturing, nanotechnology, life sciences, earth sciences, astronomy, and other fields,” says Noriyuki Matsukawa, director of the information division at MEXT.

    There's also the matter of national pride. The Earth Simulator was the world's fastest when it debuted in 2002 (Science, 1 March 2002, p. 1631), a title now held by IBM's Blue Gene/L machine at Lawrence Livermore National Laboratory. “There is strong support for trying to reclaim this gold medal,” says Akira Yoshikawa, director of MEXT's science and technology policy bureau.

    Making room.

    RIKEN's Harima Institute hopes to add a free-electron laser to its Spring-8 synchrotron.


    The x-ray source will contribute to both fundamental and applied research in life sciences and nanotechnology, says Yasunori Kojima, director of the ministry's office of synchrotron radiation research. The next step beyond synchrotrons, which produce x-rays by sending electrons in a wide circle, is the so-called free-electron laser (FEL), which generates x-rays by manipulating an electron beam traveling in a straight line. Kojima says x-ray FELs promise to resolve thorny membrane protein structures and capture images of chemical reactions as they happen. The ministry is asking for $30 million to begin building the 5-year, $365 million facility at RIKEN's Harima Institute in Kobe.

    Both projects are expected to keep Japan globally competitive. U.S. researchers are also laying plans for a petaflops supercomputer, and both Germany's synchrotron radiation laboratory (DESY) and Stanford University's Stanford Linear Accelerator Center (SLAC) are planning their own FELs. “If we don't get funding next year, SLAC and DESY will be advancing in this field, and Japan will be retreating in a big way,” says Hideo Kitamura, a RIKEN physicist who leads one of the groups working on the x-ray FEL.

    Ministry officials feel good about the funding prospects for the supercomputer and x-ray FEL projects, says Yoshikawa, who admits that other portions of the ministry's requested 9.5% increase in science-related spending, to $8.3 billion, are vulnerable. Last year, MEXT obtained a 1% increase in its science budget while science spending shrank by 2% at the Ministry of Economy, Trade, and Industry and by a whopping 22% for defense-related research. “We think there is recognition that academic research produces the seeds for industrial competitiveness,” Yoshikawa says. But every year scientists have to prove it.


    Are Human Brains Still Evolving? Brain Genes Show Signs of Selection

    1. Michael Balter

    We humans are proud of our big brains, and rightly so. Averaging 1350 cubic centimeters (cc), the human brain is proportionally larger than that of any other animal. Its highly advanced cognitive powers have spurred us to create art, build cities, and send representatives of our species into space. Just why natural selection blessed us with these talents is poorly understood. But the fossil record and genetic studies clearly show that the evolution of higher cognition began sometime after the chimp and human lines split, some 5 million to 6 million years ago, and continued at least until the rise of modern humans, roughly 200,000 years ago.

    Now two new reports on pages 1717 and 1720 of this issue suggest that the evolution of the human brain may not have stopped when Homo sapiens first came on the scene. The studies, both led by human geneticist Bruce Lahn of the University of Chicago, conclude that two genes thought to regulate brain growth have continued to evolve under natural selection until very recently—and perhaps are doing so today.

    “The possibility that our brains are continuing to adapt is fascinating and important,” says Huntington Willard, director of the Institute for Genome Sciences and Policy at Duke University in Durham, North Carolina. “Most laypeople tend to assume that humans are the pinnacle of evolution and that we have stopped evolving.”


    When mutated, certain brain genes cause microcephaly in humans (normal infant brain, left; microcephalic brain, right).


    But researchers caution that although these genetic variants, or alleles, do seem to have been the target of natural selection, there's as yet little solid evidence that the advantage they confer was brain-related. “The case for selection acting on [the genes] is reasonably strong,” says anthropologist Mark Stoneking of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. “However, there is absolutely nothing in either paper to relate the signature of selection to any brain-related phenotype.”

    Lahn's group focused on two genes, called microcephalin and ASPM, that cause primary microcephaly, a condition in which the brain is severely reduced in size. Earlier work by Lahn's group and others had shown that the human versions of microcephalin and ASPM have come under strong natural selection since the chimp-human split, implicating both genes in our ancestors' dramatic brain expansion.

    Several other genes have also been identified as potential contributors to our early ancestors' evolution (Science, 8 July, p. 234). On page 1693 of this issue, Ajit Varki of the University of California, San Diego, and his colleagues add another to the list: They report that a gene expressed in microglia, immune cells of the nervous system, produces a protein found only in humans. This suggests that it too has been the target of selection during human evolution, and that human microglia are specialized compared to those of chimps.

    In their new research, Lahn and his co-workers looked for evidence that selection had operated on microcephalin and ASPM much more recently—since the rise of modern humans. The team sequenced the DNA of about 90 human cell lines housed at the Coriell Institute for Medical Research in Camden, New Jersey, whose cell collection is broadly representative of global human diversity. For each gene, they found an allele with a surprisingly high frequency in human populations. Statistical tests showed that these frequencies are unlikely to be due to random genetic drift or population migration, suggesting that the alleles were instead favored by natural selection. Making assumptions about past mutation rates, the team then estimated when each allele arose. The favored microcephalin allele clocked in at 37,000 years ago (with confidence intervals ranging from 14,000 to 60,000 years)—about the time of the explosion of symbolic behavior in Europe. The ASPM allele arose 5800 years ago (with a possible range of 500 to 14,100 years), just before cities arose in the Near East.
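    The intuition behind ruling out drift can be shown with a toy Wright-Fisher simulation: a brand-new neutral allele in a large population is almost always lost within a few generations and essentially never drifts to high frequency on these timescales. This is only an illustrative sketch with assumed parameters (population size, generation count, a normal approximation to binomial sampling), not the statistical test the Lahn group actually used:

```python
import math
import random

def neutral_drift(pop_size=10_000, generations=2_000):
    """Final frequency of one new mutation under pure drift (no selection).
    Each generation resamples allele counts, approximating the binomial
    draw with a normal distribution for speed."""
    n = 2 * pop_size                  # chromosomes in a diploid population
    p = 1.0 / n                       # a new mutation starts as a single copy
    for _ in range(generations):
        count = random.gauss(n * p, math.sqrt(n * p * (1 - p)))
        p = min(max(round(count) / n, 0.0), 1.0)
        if p in (0.0, 1.0):           # allele lost or fixed
            break
    return p

random.seed(1)
trials = 2_000
high = sum(neutral_drift() > 0.7 for _ in range(trials))
print(f"{high}/{trials} neutral alleles drifted above 70% frequency")
```

    An allele found at 70% or more worldwide is therefore a glaring outlier under neutrality, which is the kind of signal the team's tests formalize.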

    Lahn's team argues that in the case of ASPM in particular, the young age of the selected allele and its worldwide distribution suggest that it was subject to a strong “selective sweep” in the very recent past. Lahn says these alleles may have provided an adaptive advantage in some brain-related function, possibly although not necessarily cognition. His group is now collaborating with others to see if living people with the alleles have some sort of cognitive advantage. The team has also taken out patents on both genes, which will cover tests to see whether an individual carries the favored alleles.

    Big thinker?

    Certain forms of two brain genes may confer a selective advantage.


    Despite these potentially dramatic findings, many researchers who spoke to Science were unwilling to immediately accept all of the Lahn team's conclusions. The data do bear the fingerprints of natural selection, says geneticist Chris Tyler-Smith of the Sanger Institute near Cambridge, U.K. But he questions whether that selection was acting upon the brain or some other function. Both genes are expressed in tissues other than the brain, although previous studies have shown that their expression is strongest in the developing brain of mice and humans.

    Even if the favored alleles did provide some sort of cognitive or cultural advantage, some researchers say that it was unlikely to have been a dramatic one. All normal modern humans are capable of language and symbolic expression, regardless of which alleles they have. “This suggests that the new alleles don't have a big effect on these abilities,” says Tyler-Smith, who calls the possible links to events in human prehistory “highly speculative.”

    Lahn and colleagues also found a pronounced pattern in the distribution of the favored alleles in populations around the world: The microcephalin allele, for example, is much more common in Europe, Asia, and the Americas than in sub-Saharan Africa. Using a larger sample of 1184 individuals, the team found this allele in roughly 75% or more of Italians, Russians, and Han Chinese, and in nearly 100% of Colombians. In contrast, the allele had frequencies of less than 10% in the Zime of Cameroon and the San of Namibia, and about 30% in the Maasai of Tanzania. The ASPM allele also showed a skewed geographic distribution.

    Lahn and his co-workers say that several scenarios could account for the pattern. For example, the favored alleles may have arisen outside Africa, or they may reflect a genetic “bottleneck” that occurred when a relatively small founding population carrying the alleles migrated out of Africa.

    The possibility that the favored alleles might confer some sort of cognitive edge—and that they are unevenly distributed in human populations—raises social and ethical issues, researchers say. Lahn warns that there is “a lot of potential for over- and misinterpretation” of his results. He points out that other advantageous alleles might have a very different population distribution. “You don't necessarily come out ahead” if you have these alleles, Lahn says: “We only picked out two.”

    Although they acknowledge such social concerns, most scientists who spoke to Science say that the only way to answer the questions posed by this research is to do more research. “We should treat these genes just like any others,” says Tyler-Smith.


    Panel Puts Eventual Chornobyl Death Toll in Thousands

    1. John Bohannon*
    1. John Bohannon is a writer in Berlin, Germany.

    VIENNA, AUSTRIA—A study released this week predicts that 4000 people or even more will die from cancers caused by the 1986 Chornobyl nuclear accident, a figure that dwarfs the 50 known deaths linked to the disaster so far. The report,* compiled by the Chernobyl Forum, a joint effort of eight United Nations agencies and the governments of Ukraine, Belarus, and Russia, also highlights the thousands who have suffered a variety of mental health problems since the accident.

    The meltdown of one of the reactors at the Chornobyl power plant in Ukraine on 26 April 1986 released approximately 50 tons of radioactive material into the atmosphere, contaminating an area inhabited by 5 million people. Because the most pernicious contamination was radioactive iodine-131, which lodges in the thyroid, most of the casualties are expected to succumb to thyroid cancer, which typically takes 25 years or more to show up.

    Over the 19 years since the accident, estimates of the final death toll from radiation-induced cancer have ranged from zero to tens of thousands. The panel of 100 scientists involved in the Chernobyl Forum reduced that uncertainty by reviewing all available data, discounting studies that were not sufficiently rigorous, and arriving at the figure of about 4000 deaths. “But that only considers the 600,000 people living in the most exposed areas. [The total] could double to 8000 if you also consider people around that area,” says forum member Fred Mettler, a radiologist at the University of New Mexico in Albuquerque.

    Radiation biologist Mikhail Balonov was part of the Soviet team rushed in to assess Chornobyl in 1986, and he says his team “also predicted 4000 deaths. But our conclusions were classified.” The forum's 600-page report, released by the International Atomic Energy Agency (IAEA) here on 5 September, also echoes initial predictions that the radiation will have no effect on fertility or the frequency of birth defects in the second generation. “Luckily, the exposure was too low for that,” says Balonov, who now heads IAEA's Radioactive Discharges Unit. Other effects of the radiation are either too subtle or have not yet been detected.

    Sleeping giant.

    A guard walks past the remains of Chornobyl's reactor #4, which is encased in a now-crumbling sarcophagus.


    The outlook for the environment around Chornobyl appears somewhat better. According to the report, 90% of the radioactive contamination was cleaned up through a massive removal of surface soils. Researchers are developing special salts and fertilizers to keep the remaining radioactive material in soil from getting into crop plants. On the whole, the forum concludes, radiation in most of the originally exposed area is close to background levels.

    The report's most surprising conclusion is that mental health problems appear to be more common than any radiation-linked disease. The incidence of high anxiety is twice normal levels, and rates of unexplained pain or debilitation are three to four times those in comparable unexposed populations. One possible cause is the trauma experienced by the 350,000 residents who were forcibly relocated.

    Mettler, a member of the international scientific team that first visited the Chornobyl site in 1990, says another factor “is the psychological impact on people of not knowing the extent of contamination or the real health risks it poses.” That uncertainty, according to the report, seems to have translated into unhealthy lifestyle choices such as heavy smoking, drinking, drug use, and poor diet.

    Removing anxiety won't be easy, says Balonov. People in the Chornobyl area do not trust government officials, he notes, because “there was a tradition of lying” in Soviet times. Mettler hopes the Chernobyl Forum report will reassure residents. “It's a start,” he says.


    Dissecting a Hidden Breast Cancer Risk

    1. Jennifer Couzin*
    1. With reporting by former intern Cathy Tran.

    Many studies have shown that high breast density boosts a woman's chance of breast cancer. But it's not clear why, and doctors aren't sure how to translate the knowledge they've got to the clinic

    On a typical day, at least 150 women undergo mammography at the California Pacific Medical Center in San Francisco, and for the last year, most have agreed to take part in a giant experiment. Their mammograms are stripped of identifiers and transmitted across town to the University of California, San Francisco (UCSF). At both institutions, physicists, epidemiologists, and doctors are sifting through clues, trying to bring into the clinic one of the biggest—but relatively unheard of—risk factors for breast cancer: breast density.

    Roughly 50 studies have suggested that high breast density—in other words, relatively little fat and lots of connective tissue in the breast compared to other women in the same age group—boosts a woman's risk of breast cancer, and current estimates put the increase at four to six times. Only two other traits are known to increase risk more: age and harboring a mutated version of the breast cancer susceptibility gene BRCA1 or BRCA2.

    Despite the consistent connection between breast density and cancer, fundamental biological questions remain. It's not entirely clear what dense tissue is made of, why it increases the likelihood of breast cancer, or how it modulates an individual woman's cancer risk. Nor do researchers know whether decreasing a woman's density will lower her chance of cancer. Even measuring density is problematic (see sidebar, p. 1665).

    “More and more people are realizing—we've got this strong risk factor, we need to understand it,” says Celia Byrne, a cancer epidemiologist at Georgetown University Medical Center in Washington, D.C.

    Faced with a powerful risk factor that comes with many unknowns, researchers and physicians are uncertain about when and how to apply their emerging understanding of breast density to patients. If integrated into clinical practice, density measures are likely to reshape the landscape of breast cancer risk assessment and prevention. The number of women with dense breasts is enormous. High-risk clinics “would be overwhelmed” if they were all referred there, says Frederick Margolin, a radiologist and director of the Breast Health Center at California Pacific Medical Center. He's working with UCSF scientists on the mammogram study, which is part of the broader San Francisco Mammography Registry examining risks and outcomes of breast cancer.

    But should women even be told their status, or would that provoke needless anxiety? Should they be offered extra screening, or prevention drugs, which carry their own hazards and whose effects on density are unclear? One physician estimates that if density were taken into account in risk assessment, up to 20% of postmenopausal women in the United States would be eligible for prevention drugs.

    Some scientists believe it's unethical to discuss density with patients until the risk factor is more fully dissected; others believe it's unethical not to. But the reality is that today, few physicians aware of density's risks share the information with their patients. “We don't know what to do with it,” concedes Margolin.

    On the frontlines.

    Toronto epidemiologist Norman Boyd is pursuing the unsolved mysteries of breast density.


    Density uncovered

    The breast density field was born in the mid-1970s, when, on a hunch, a Michigan radiologist named John Wolfe began outlining pools of dense tissue with a wax pencil on mammograms. He reported that women with denser breasts were more likely to contract breast cancer. The increase in risk Wolfe postulated from high breast density was staggering, as much as 20 times the risk faced by women with low density.

    “My initial reading was, this may be something Wolfe can see, but the rest of us probably can't,” says Norman Boyd, a cancer epidemiologist at Princess Margaret Hospital in downtown Toronto. Dressed trimly in a blue button-down shirt and tan slacks, Boyd speaks with a British accent that has faded over four decades in Canada. He says his early skepticism dissipated when he asked ten radiologists to assess density on mammograms. The radiologists agreed more on density than on any other mammographic feature, such as tumors.

    In the following years, Boyd and a small band of others helped establish that breast density increases the risk of breast cancer, though not as much as Wolfe predicted. The precise boost in risk, though, remains uncertain.

    From the beginning, density researchers faced skepticism on several fronts. One lingering concern was whether high density—which makes mammograms tough to read because dense tissue appears as roughly the same shade as tumors—was masking cancer rather than driving it. In 1995, in the Journal of the National Cancer Institute, Byrne and her colleagues published a study of 4000 women, 1880 of whom had developed breast cancer, that she thought would settle the question. Examining mammograms pulled from storage, Byrne's group found that some women didn't develop cancer until 14 years after high density had been detected. “This was pretty convincing evidence” that dense breasts weren't just masking tumors, she says. Still, some radiologists remain unconvinced that density can augment cancer risk.

    How, then, might dense tissue be triggering cancer? One of the few basic biologists exploring this is Thea Tlsty, a molecular pathologist at UCSF. She's collecting mastectomy specimens from healthy breasts, taken from women who died of other causes, such as car accidents.

    Tlsty and others have found that dense breast tissue contains two connective-tissue components, along with epithelial cells, which can form tumors. Those components are collagen, a protein also found in skin and bone, and fibroblasts, cells common in connective tissue throughout the body.

    While epithelial cells are considered cells “at risk” in the cancer world, because they make up the bulk of most tumors, it's the connective tissues that have Tlsty intrigued. Just like tumors, dense breasts are “chock-full of collagen,” Tlsty says. More mysteriously, she is finding that collagen in high-density breasts is actually different from collagen in low-density breasts, turning over much more rapidly.

    Now Tlsty is trying to learn more about fibroblasts from her mastectomy samples. Although fibroblasts don't form tumors themselves, they can signal to cancers and help them grow. Tlsty wonders whether fibroblasts in dense breasts send the same kind of signals they send in the presence of a cancer.

    Density DNA

    Examining dense tissue, as Tlsty does, is one place to start. Another is in the genes. Three years ago, Boyd, his Australian colleague John Hopper from the University of Melbourne, and others reported in the New England Journal of Medicine that breast density is largely inherited, based on a study of 950 pairs of identical and fraternal twins. They estimated that genetics accounts for roughly 60% of the variation in density.

    Now the hope is that the hunt for genes behind breast density might also turn up genes that confer breast cancer risk: The same genes may have both effects. “I don't think anybody's looked at breast cancer genes in this particular way,” says Johanna Rommens, a geneticist at Toronto's Hospital for Sick Children. Just two genes, BRCA1 and BRCA2, are relatively common and known to significantly increase the chance of breast cancer, but many families with the disease don't carry these mutations.

    A fast-talking gene-hunter whose face lights up when she discusses DNA, Rommens helped discover the cystic fibrosis gene as a postdoctoral fellow. Pasted above her desk is a photo of a smiling, gap-toothed 9-year-old girl, a survivor of a bone marrow failure disorder whose genetic culprit Rommens also tracked down. But none of her previous searches, Rommens says, match her hunt for breast density genes in its complexity.

    After a couple of years of lobbying, Rommens agreed to join Boyd, across the street at Princess Margaret, in an ambitious breast density genetics project. The pair, along with several colleagues, has applied for $4 million in funding and is planning a study of about 5000 individuals, most of them pairs of sisters who belong to a global breast cancer registry. (Not all the sisters have developed breast cancer.) The scientists will compare breast density on mammograms with their subjects' DNA.

    Looking for density genes is complicated, says Rommens, because density is only partly genetic and, unlike disease, it's not a yes-or-no proposition. One woman might have 20% density, another 50%, and another 75%. But no patient has 50% diabetes.

    At least two other teams are pursuing density genes: One led by cancer epidemiologists Celine Vachon at the Mayo Clinic in Rochester, Minnesota, and Thomas Sellers at the H. Lee Moffitt Cancer Center & Research Institute in Tampa, Florida; and the other by Douglas Easton at the University of Cambridge, U.K. Easton, who helped discover BRCA2, is also completing a density study in 500 BRCA carriers, a group whose connection with breast density, if one exists, remains unclear.

    Another looming question is how age and hormone levels influence the concentration of dense tissue. Nearly all of the density data collected so far is on women over 50. But a woman's density levels naturally dip with age, and researchers are wondering more and more whether it's her density at age 30 that helps determine her cancer risk at age 60. “You need to ask yourself, 'Is it where they started [with density] or where they ended up?'” says Sellers.

    It's questions like these that have some researchers particularly excited, because they could challenge classic thinking about breast cancer. Breast cancer gene studies, for example, says Easton, have tended to focus on the same pathways, the same hormonal patterns. “We need to break out of that paradigm,” he says.

    Already, some density experts are pushing work forward on a hormone that's been tentatively tied to breast cancer. Called insulin-like growth factor type 1, or IGF-1, it's produced in the liver and the breast's connective tissue. “IGF is the best clue” tying density and cancer together, says Steven Cummings, an internist and epidemiologist at UCSF, even if its precise connection to breast cancer is murky.

    Clinical conundrums

    Solving these questions, along with the measurement challenges, may open a Pandora's box for doctors and their patients. Not all radiologists are aware that density is associated with cancer, and even those who are may be reluctant to share density information with women for fear that it will provoke needless anxiety.

    Jack Cuzick, a cancer epidemiologist at the Wolfson Institute of Preventive Medicine in London, is incensed that women aren't told their breast density measures by radiologists who know it's a risk factor. It's an “ethical issue,” he says. So Cuzick is urging radiologists and surgeons to share breast density information with patients. To circumvent concerns about fostering anxiety, Cuzick is focusing his energies on physicians at U.K. screening clinics where women with suspicious mammograms are referred. These women, Cuzick reasons, are already worried. His efforts are meeting with mixed results. “Some [physicians] are prepared to do it, and some are saying they can't do it,” he says.

    Others can appreciate Cuzick's views, but their own are more nuanced. “There's a whole philosophical concern” about communicating with patients before important questions about density are answered, says Martin Yaffe, a medical biophysicist at Sunnybrook and Women's College Health Sciences Centre in Toronto. Tlsty is torn. She'd want her own density measures, she says, but isn't sure density is ready for widespread clinical use.

    Many physicians say they'd be more inclined to use density measures if they appeared in risk assessment models. Mitchell Gail, a biostatistician at the National Cancer Institute, is experimenting with adding density to the so-called Gail model, commonly used in the United States. The Gail model already includes such risk factors as age, family history, and age at first menstrual period. Gail is testing whether adding density makes it more precise. (Models don't always need to include all risk factors, especially those that travel together.) But, says Byrne, who's working with Gail, “Nothing's going to tell you yes or no absolutely” about whether you'll contract the disease.

    Cancer link?

    Like tumors, dense breast tissue (above) is packed with collagen, shown here in pink.


    The risk models also can't answer a critical question: Does lowering density lower breast cancer risk? Malcolm Pike, a cancer epidemiologist at the University of Southern California, argues that it does, given the drug tamoxifen's ability to lower breast density as well as breast cancer risk—or, for that matter, the tendency of hormone replacement therapy to increase density alongside cancer risk.

    But Boyd, Yaffe, and others are not so sure. “Just because two things move together doesn't mean that one thing causes the other,” says Yaffe. Testing the theory is difficult, though Boyd is trying with a diet study that will wrap up later this year. In that study, 4700 women are assigned to normal or low-fat diets for at least eight years. After two years, the low-fat diets had reduced density slightly; Boyd doesn't yet know whether they also lower the chance of breast cancer.

    Given all these uncertainties, perhaps the biggest question for doctors and patients is what women with dense breasts can do about it. Extra screening is one obvious possibility—except that the most common and cheapest screening tool, mammography, isn't well suited to catching tumors in dense tissue. (MRI and ultrasound can detect them, however.) Boyd's evidence that diet can alter density hasn't been replicated. Another controversial option is preventive drugs: Tamoxifen has been shown to stave off breast cancer, and two other drugs, raloxifene and lasofoxifene, are being tested in large trials nearing completion. (Both are designed to treat and prevent osteoporosis.)

    But some oncologists are uncomfortable considering preventive drugs for women whose only breast cancer risk factor is high density. Numbers alone make density delicate to use in a chemoprevention setting. If breast density were treated like any other risk factor, that would make 15% to 20% of postmenopausal women eligible for preventive drugs, estimates Cummings, who has worked closely with Eli Lilly and Pfizer, the drugs' makers. It's also not clear whether prevention drugs act along the same pathway as density, says Boyd, and thus how effective they would be in women with high density.

    “We need to be very careful in saying, oh, you have 75% breast density,” says Carol Fabian, an oncologist who directs the breast cancer prevention program at the University of Kansas Medical Center. “You need to be on tamoxifen.”


    Fine-Tuning Breast Density Measures

    1. Jennifer Couzin

    In addition to unresolved questions about how density might spur cancer, another barrier that prevents clinicians from using density data to gauge cancer risk is the difficulty in measuring it. “We all agree on how much is a pound and how much is an inch, but there isn't necessarily a common scale for density,” says Celia Byrne of Georgetown University in Washington, D.C.

    Like Norman Boyd, a founder of the breast density field who's based at Princess Margaret Hospital in Toronto, Byrne measures density the old-fashioned way: by “drawing,” with some computer guidance, over digitized mammograms. The method is time-intensive and somewhat subjective. It has also fueled worries that despite their predictive power, density measures are crude and may vary in different hands.

    Shades of risk.

    Viewed on 3-D software, dense tissue in a high (62%) density breast glows red, while a breast with just 4% density is filled with cooler color tones.


    Researchers in Toronto and in San Francisco are independently trying to improve and automate density measures. They have developed software that measures density volume, instead of just its surface area; both versions are now being tested to see if they're more predictive of risk.

    In his office at Toronto's Sunnybrook and Women's College Health Sciences Centre, with stacks of papers balanced precariously on most surfaces, medical biophysicist Martin Yaffe demonstrates new software designed to measure density volume. On one breast, traditional 2-D measures reveal 7% density; the volumetric approach reveals 22%. Yaffe and Boyd are conducting further tests to validate the volumetric software, including in a study of 1000 postmenopausal Caucasian and Chinese women.

    At the same time, researchers at the University of California, San Francisco (UCSF), and California Pacific Medical Center are trying to automate density fully. They're testing their method on tens of thousands of volunteers undergoing mammograms at California Pacific. “We don't have an automated program yet that's as good as having a person look at [a mammogram],” says Byrne. The leaders of the California Pacific study—including UCSF epidemiologists Karla Kerlikowske and Steven Cummings, and physicist John Shepherd, who designed the software—are hoping for an answer within the next year or so.


    Deep Impact Finds a Flying Snowbank of a Comet

    1. Richard A. Kerr

    Blasting a comet for science's sake turns out to be harder than expected, but targeting Tempel 1 has revealed a fragile body that's already been through geologic turmoil

    Before comet researchers smashed a nearly half-ton chunk of metal into Tempel 1 last July, they weren't at all sure how a comet would react to such an assault (Science, 27 May, p. 1247). The possibilities ranged from imperceptible to apocalyptic. Perhaps it would swallow the impactor without a trace, like a marshmallow engulfing a BB. Perhaps it would form a fairly conventional impact crater. Or perhaps it would go completely to pieces. Just what kind of hole the impact left in the 14-kilometer-long chunk of ice and dirt should tell what this comet is like, Deep Impact team members said. Such groundbreaking exploration, they added, would no doubt surprise them.

    They got the surprise right. The researchers report online in Science this week* that the impact blew out so much unexpectedly fine powder that they still haven't seen the crater through all the dust. Indeed, they might well never extract the crater from the images returned by the Deep Impact mother craft.

    Destination Tempel 1.

    Smooth areas edged by lighter scarps (top and lower left) suggest that the comet has been geologically reworked.


    Even so, analysis of close-up observations of the surface and the way impact ejecta behaved shows that comets “are not cosmic candy fluff or chunks of concrete,” says team member Joseph Veverka of Cornell University in Ithaca, New York. This one at least is decidedly on the fluffy side, though, with powder-size particles weakly agglomerated into something with the consistency of a snowbank. And, much to everyone's surprise, Tempel 1 shows signs of past geological activity; it is not just a primordial dirty snowball.

    Several sorts of observations made the inferences possible. The spacecraft's infrared spectrometer measured varying surface temperatures under different illumination by the sun. The readings showed that the surface is quick to warm or cool, which means it must be porous like loose sand or granular snow, not a solid block of ice.


    The impactor threw up a surprising amount of powdery debris, so much that a rising dust plume cast a shadow (red arrows) more than 5 seconds after impact and obscured the resulting crater.


    What the mother ship could see of the impact and its aftermath also points to a weak, fine-grained comet. The spacecraft imaged an expanding cone of ejecta as close to the crater as it could view. The cone continued to rise for more than an hour, which means the outer meters of the comet must consist of weak, easily powdered material. The rate at which the ejecta cone expanded near its base depends on the strength of Tempel 1's gravity. From that dependence the team calculates that the comet as a whole has a density of roughly 0.6 grams per cubic centimeter, two-thirds that of pure water ice. Something like 50% to 70% of Tempel 1 must therefore consist of empty space, say Veverka and mission principal investigator Michael A'Hearn of the University of Maryland, College Park. “The closest [analog] may be a fresh, light New England snowfall, except dirty,” says team member Peter Schultz of Brown University in Providence, Rhode Island.
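    The porosity figure follows from simple arithmetic. As a sketch only: the bulk density is the team's reported 0.6 grams per cubic centimeter, but the grain density used below (an ice-rock mix between about 1.2 and 2.0 grams per cubic centimeter) is an illustrative assumption, not a number from the mission:

    ```latex
    % Porosity \phi from bulk density \rho_b and grain density \rho_s:
    %   \phi = 1 - \rho_b / \rho_s
    % With \rho_b \approx 0.6\ \mathrm{g\,cm^{-3}} and an assumed
    % ice-rock grain density \rho_s of 1.2 to 2.0 g\,cm^{-3}:
    \phi = 1 - \frac{\rho_b}{\rho_s}
         = 1 - \frac{0.6}{1.2\ \text{to}\ 2.0}
         \approx 0.5\ \text{to}\ 0.7
    ```

    With those assumed grain densities, 50% to 70% of the comet's volume comes out as empty space, matching the range Veverka and A'Hearn quote.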

    A cruddy flying snowbank isn't that far from what planetary scientists foresaw, but they did expect to find that comets have been flying around pretty much unaltered since they formed 4.5 billion years ago. Close-up looks at comets Halley, Borrelly, and Wild 2 (Science, 5 October 2001, p. 27; 9 January 2004, p. 151) have revealed the expected surficial battering by impacts and craggy decay from loss of ice to solar heating. But Tempel 1 also has dramatic examples of layers, geologic strata of uncertain origin. “We're still puzzling through the layering,” says A'Hearn. One smooth area appears eaten away at its edges, revealing an older layer beneath with its own muted impact craters. No one is willing to say whether the layers formed when Tempel 1 did, or much later. They might have been laid down as the comet formed layer upon layer 4.6 billion years ago. The layers are reminiscent of those recently found on Saturn's cometlike moon Phoebe (Science, 18 June 2004, p. 1727).

    “Comet nuclei have clearly undergone geologically interesting processes,” says Veverka. That news excites the geologists, but it could frustrate the geochemists looking for unaltered cometary material in the Deep Impact debris. Their search has just begun.


    Coming Into Focus: A Universe Shaped By Violent Galaxies

    1. Robert Irion

    Astronomers and theorists still lack a detailed model of how the Milky Way and its cousins formed. But they now agree on one thing: Galaxies really blow

    SANTA CRUZ, CALIFORNIA—When theorists try to simulate how galaxies grow, they never quite get it right. In the computer world, spirals like our Milky Way rarely develop graceful disks with the sizes that astronomers see. The giant blobs called elliptical galaxies spawn new stars for too long. And models predict too many tiny galaxies, strewn through volumes where astronomers see mostly blackness.

    But in the last year, modelers have improved their galaxy recipes by adding generous portions of wanton violence, which alters the appearance and dynamics of galaxies in just the right ways. At all stages of the universe's growth, blasts from supernovas and surges of energy from massive black holes appear to dictate the fates of galaxies. These waves of unrest—often sparked by galactic collisions—wreak havoc on a galaxy's supply of star-forming gas but in dramatically different ways for big galaxies and small ones.

    Astronomers have learned to recognize this “feedback” in galaxies, and they now see it nearly everywhere they look. What's more, simulations that include feedback and a healthy dollop of galaxy mergers are producing the most realistic assemblies of virtual galaxies yet created. “Feedback is really the new frontier in understanding galaxy formation,” says astronomer S. Michael Fall of the Space Telescope Science Institute in Baltimore, Maryland. Still, the gaseous interplay that drives feedback is so complex—and so poorly understood—that no one yet considers the galaxy problem solved.

    The dark new framework

    Feedback in various guises suffused a crowded agenda for 250 scientists meeting* last month among the redwood trees of the University of California (UC), Santa Cruz. A bracing blend of leading theorists and observers focused on galaxies shining within a hidden network of dark matter that spans the cosmos.

    For more than two decades, astronomers have known that clumps of dark matter create the gravitational pits within which galaxies assemble. “It's almost a standard model,” says astrophysicist Joseph Silk of the University of Oxford, U.K. Indeed, the meeting celebrated the 60th birthdays of three UC Santa Cruz researchers who first conceived that model in a 1984 paper in Nature: astrophysicist George Blumenthal, astronomer Sandra Faber, and cosmologist Joel Primack. (The fourth author, astronomer Martin Rees of the University of Cambridge, U.K., could not attend.)

    The universe has grown stranger since then. Cosmologists now believe that “dark energy,” an utterly unknown force, drives an ever-accelerating expansion of space (Science, 2 September, p. 1482). But dark energy won little overt notice at the meeting, for it becomes important only on scales far larger than an individual galaxy. “It's the greatest discovery of the century, yet it has had very little effect on people like us,” Faber mused.

    Explosive spread.

    In a simulation of gas contained in galaxies 2.2 billion years after the big bang (left), winds from supernovas propel heavy elements deep into space (right).


    However, dark energy did set a framework in two areas. First, several speakers examined the history of galactic mergers. Nearby collisions fling arcs of stars into space and spark telltale bursts of starbirth. The signs aren't as clear at great distances, where astronomers try to reconstruct the younger universe from faint images. “Distant galaxies all look a little funky,” says astronomer Hans-Walter Rix of the Max Planck Institute for Astronomy in Heidelberg, Germany. With no clean way to spot mergers, he says, astronomers can't specify how much a given galaxy transformed itself by devouring its neighbors. But they suspect the collisions happened often, especially in rich galaxy clusters.

    Galactic makeovers are rarer today, says Matthias Steinmetz, director of the Astrophysical Institute Potsdam in Germany. The main culprit is dark energy, which has pushed galaxies farther apart than they otherwise would be. As a result, the universe of the last 7 billion years—roughly half the cosmic age—has provided a comparatively peaceful setting for galaxy growth. Many spirals today might not exist without dark energy, Steinmetz notes, because mergers damage their extended flattened disks.

    Still, astronomers have scoured space for imprints of mergers that do continue. New research suggests that elliptical galaxies—the behemoths of the cosmos—are far from inert. Astronomer Pieter van Dokkum of Yale University in New Haven, Connecticut, and colleagues took sensitive images of 126 ellipticals to search for disrupted patterns. Wide fans of stars and other collision remnants popped out for more than half of the galaxies. Van Dokkum's analysis suggests that elliptical galaxies add 10% more mass to their bloated retinues every billion years.

    “It's clear that a picture of galaxy formation without mergers as a central theme does not work,” says Rix.

    The red-and-blue divide

    These greedy acquisitions are just one feature of elliptical galaxies. More puzzling is how ellipticals produced dazzling bursts of new stars in the young universe—as early as a billion years after the big bang—and then shut down just as quickly (Science, 19 March 2004, p. 1750). Galaxy modelers have now generated results that look tantalizingly close to real observations.

    Researchers can also thank dark energy for this advance—at least indirectly. “It used to be that whenever you did a simulation, you had to run many of them because you didn't know the cosmological parameters,” says meeting co-organizer Avishai Dekel of Hebrew University in Jerusalem, Israel. “Now that we know [the proportions of] dark matter and dark energy, most effort goes into the physics of galaxy formation.”

    The progress is helping astronomers understand why galaxies occupy two broad color categories, like the states in U.S. electoral politics. Elliptical galaxies shine with the cool reddened light of ancient stars, with virtually no gas left to make more suns. Disk galaxies, on the other hand, brim with gas and sparkle with new stars, blazing with blue and ultraviolet vigor. Red galaxies can grow 10 to 100 times as massive as their blue cousins. Ongoing surveys of tens of thousands of galaxies also show that red ones typically clump together, while blue ones spread far and wide across space.

    An ambitious new simulation has reproduced these galaxy properties, and many others. Astrophysicists led by Volker Springel and Simon White of the Max Planck Institute for Astrophysics in Garching, Germany, created the Millennium Run: a model containing 10 billion particles of dark matter, arrayed within a cube more than 2 billion light-years on a side. The team started with minuscule fluctuations in the distribution of matter, as reflected in the subtle patterns of the cosmic microwave background—the remnant heat of the big bang. Then, gravity and dark energy acted on those fluctuations for the age of the universe in simulated time. The result was a striking web of dark matter, bearing eerie resemblance to a neural network.

    Next, the team used a separate model of gas physics, stars, and the dynamics of massive black holes to track simulated galaxies within each knot of dark matter—about 18 million in all. As Springel and White reported at the meeting and in the 2 June issue of Nature, the physical properties of these galaxies neatly captured the red-and-blue divide. Notably, the old, massive red galaxies clustered tightly within the densest regions of dark matter, while the youngest blue galaxies spread out smoothly—the first such match for a large-scale model. “To me, this effect was quite unexpected,” says White.

    Lost in space.

    Stellar outbursts push hot gas out of a nearby dwarf galaxy (white outline).


    The monumental effort won fans, including Primack, who has modeled the growth of cosmic structure since the mid-1980s. “I'm absolutely blown away by the success of these huge simulations,” he says.

    Furious outpourings of energy were the key to making the model work, says Springel. “Feedback is essential for structure formation on all scales,” he notes. For instance, shock waves and winds blown by supernovas will churn up small to medium-sized galaxies and prevent them from forming as many stars as they would otherwise. This effect can short-circuit star formation in some dwarf galaxies before it takes hold, leaving nuggets of dark matter with little visible light.

    In bigger galaxies, even supernovas don't have enough oomph to propel gas into deep space. Instead, powerful beacons of radio energy—driven by matter cascading into supermassive black holes—rule the outcome. When galaxies reach a critical mass, this feedback heats gas so intensely that new stars can no longer form. The galaxies become “red and dead,” massive old ellipticals sitting at the hearts of giant galaxy clusters.

    Prepare for blowout

    Other new models zeroed in on this “blowout phase.” In a simulation led by graduate student Philip Hopkins and astrophysicist Lars Hernquist of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, galaxy cores turn “on” and “off” more erratically than previously thought. When two gas-rich galaxies collide, gravitational forces propel breathtaking streams of gas toward the newly merged black hole at the center. The hot disk of matter around the black hole emits as much radiation as physics allows, creating a hyperluminous quasar.

    The rich get richer.

    Streamers and fans of stars show huge elliptical galaxies have absorbed smaller companions.


    Movies of this process show that the quasar expels gas from the merging galaxies in what amounts to a detonation. It's aberrant behavior for an object that is relatively quiet for perhaps 99% of its history. Indeed, nearly every galaxy may have hosted a bright quasar throwing at least one such tantrum, Hernquist says. The model successfully explains the numbers of active quasars, how red galaxies evolve, and a strict relationship between the mass of a galaxy's central black hole and the spheroidal bulge of stars around it. The team has submitted its work to the Astrophysical Journal.

    Studies of galaxies on all scales bolster these models. For instance, x-ray images of nearby dwarfs show that some of them seem turned inside out by supernovas and winds from massive stars, ejecting gas beyond the galaxies' reach. And at the meeting, astronomer Charles Steidel of the California Institute of Technology in Pasadena reported on studies of the environments around extremely distant galaxies with one of the 10-meter Keck Telescopes in Hawaii. Rapid motions of heavy elements near the galaxies point to outward wind speeds of 500 to 600 kilometers per second, with much of the gas escaping the galactic clutches. In extreme cases, Steidel's team found quasars that have swept out space around them for tens of millions of light-years.

    Speakers agreed that the models and observations point to many galaxies growing in fits and starts, ultimately limited by violent feedback. But galactic gas is complex, and it's not clear how it responds to winds, shocks, and explosions. For example, most models at the meeting included a prescription that a galaxy of a certain mass will expel a certain percentage of its gas as it forms. “But there's no understanding of why,” says physicist Anthony Aguirre of the University of California, Santa Cruz. “For everything on the scale between a star and a galaxy, we're more or less clueless. It's really the gas that makes things bad.”

    For this reason, the impressive simulations do not yet offer a convincing explanation of what makes galaxies tick. As Oxford's Joseph Silk cautioned about the Millennium Run: “It is a beautiful picture of how [large-scale] structure forms, but is it right?” The answer, he says, awaits a deeper scrutiny of the physics of gas and stars close to home.

    * Nearly Normal Galaxies in a Lambda-CDM Universe, 8-12 August 2005.
