# News this Week

Science  05 May 2006:
Vol. 312, Issue 5774, pp. 668

# Texas Earmark Allots Millions to Disputed Theory of Gulf War Illness

1. Jennifer Couzin

Scientists usually bristle when U.S. legislators mandate a project that benefits their constituents. But Gulf War illness researchers are especially troubled by such a funding provision inserted by Senator Kay Bailey Hutchison (R-TX) in this year's budget for the Department of Veterans Affairs (VA). The $15 million earmark to the University of Texas (UT) Southwestern Medical Center in Dallas not only avoids the traditional peer-review process, but it also marks the rare—and possibly first ever—instance of VA funding a program outside its research network, run by a researcher whose theory of the debilitating illness hasn't won much scientific support.

“The particular avenue of research being pursued is not one that has found much favor with the scientific community,” says Simon Wessely, director of the King's Centre for Military Health Research at King's College London. Adds John Feussner, a former head of VA research now at the Medical University of South Carolina in Charleston, “This takes money directly out of the VA research portfolio. … I can't think of any advantage” from the new Gulf War research program.

The money will fund a new center at UT Southwestern, unveiled on 21 April. It exists thanks to Hutchison, who chairs the spending panel that sets the VA's budget and has long urged more government-funded research into Gulf War illness. Her priority “is getting the money to the person who can best help battle this illness,” says spokesperson Chris Paulitz. In her mind, that individual is epidemiologist Robert Haley, who for years has reported a strong link between exposure to neurotoxins, such as nerve gas and pesticides, and the puzzling cluster of symptoms that struck thousands of veterans after the 1990-'91 Gulf War. Haley was initially funded by former presidential candidate and businessman Ross Perot and later by the Department of Defense.
He believes that Gulf War illness is “an encephalopathy” marked by abnormalities in brain structures and in the nervous system. Many troops, he argues, were exposed to low levels of nerve gas during the first Gulf War. Now, Haley expects to pin down how these toxins affect the brain, and how to ease their effects, once and for all. Certainly, there's no shortage of funds: Hutchison expects the center—which Haley says will be called the Gulf War Illness and Chemical Exposure Research Center—will receive $75 million from VA over 5 years. Haley says it will initially focus on brain imaging, a survey of veterans from the first Gulf War, animal studies, and a Gulf War illness research and treatment clinic at the Dallas VA Medical Center.

But “this is not a grant to Robert Haley,” he says. The dean of UT Southwestern's medical school, Alfred Gilman, will convene a merit review committee, and “all of our projects will go through” it, says Haley, adding that the committee's precise function hasn't been set. Traditional peer review as practiced by agencies such as VA and the National Institutes of Health, says Haley, has helped scientists take small steps forward. But it has failed to solve the enigma of Gulf War illness. “If we continue at this rate,” he says, “it's going to be 50 years before we help these people.”

Haley's Gulf War theories, however, put him in the minority. Animal studies disagree on whether low-dose neurotoxin exposure is deleterious in the long term, and the neurotoxin theory has come up short in expert reviews. In 2004, the Institute of Medicine (IOM) in Washington, D.C., concluded that “there is inadequate/insufficient evidence” to forge a link between exposure to low levels of sarin gas and the memory loss, muscle and joint pain, and other symptoms that characterize Gulf War illness. Wessely argues that British troops, who have the same rates of Gulf War illness as Americans, were nowhere near the Khamisiyah weapons depot in Iraq, the most cited example of suspected nerve gas exposure during the war. The IOM report notes that an attempt to replicate Haley's findings of genetic susceptibility to nerve gas proved unsuccessful.

A VA committee that included Haley came to a different conclusion. It reported in 2004 that neurotoxin exposure was a “probable” explanation for Gulf War illness and recommended that VA spend at least $60 million over 4 years on Gulf War illness research. The neurotoxin arena “is the most promising area for research at the present time,” says James Binns, a Vietnam veteran and Arizona businessman, who chaired the committee that wrote the report. VA agreed (Science, 19 November 2004, p. 1275) but never put up the money—until Hutchison's amendment compelled it to do so.

Initial funding will be limited to UT Southwestern and other schools, generally in Dallas, with which Haley collaborates, he says. Joel Kupersmith, VA's chief research and development officer, calls the plan “an opportunity to move ahead on Gulf War research” and expressed “confidence” in UT Southwestern. But then again, VA had little choice but to move forward. “We follow what the laws and regulations are,” says Kupersmith.

2. BIOMEDICINE

# Genes and Chronic Fatigue: How Strong Is the Evidence?

1. Jocelyn Kaiser

The U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, announced last month that it has cracked a medical mystery: Chronic fatigue syndrome (CFS) has a biological and genetic basis. CDC Director Julie Gerberding called the study “groundbreaking” and also hailed its novel methodology. These claims have attracted widespread media attention. But, like most aspects of CFS, the study and its findings are controversial. Some scientists think the agency is overstating the case for a link between the syndrome and genetic mutations. “Most complex-trait geneticists would interpret [these] findings more cautiously than the authors have,” says Patrick Sullivan, a psychiatric geneticist at the University of North Carolina, Chapel Hill.

CFS is defined as severe fatigue lasting more than 6 months, accompanied by symptoms such as muscle pain and memory problems.
It is thought to afflict at least 1 million Americans, mostly women. The lack of specific diagnostic criteria since CFS was first defined 20 years ago has led to debate over whether the cause could be an infectious agent, a psychiatric condition, or something else—and made research funding for the disorder highly political. In 2000, a CDC division director lost his job after the agency diverted $12.9 million that Congress had instructed CDC to spend on CFS research to other infectious disease studies (Science, 7 January 2000, p. 22). The agency agreed to restore the money over 4 years and launch a major study.

The new project, led by William Reeves, CDC's lead CFS researcher (who had blown the whistle on the diverted funds), took an unusual approach. Instead of recruiting patients already diagnosed with CFS, CDC surveyed one-quarter of the population of Wichita, Kansas, by phone to find people suffering from severe fatigue. Several thousand then underwent screening at a clinic for CFS. The population-based aspect is “a big plus” because it avoids the possible bias in tapping a pool of patients seeking treatment for their problems, says Simon Wessely, who studies CFS and a similar disorder, Gulf War illness, at King's College London.

Out of this survey, 172 people, most of them white middle-aged women, were deemed to fit the criteria for CFS (58) or CFS-like illness (114). A total of 227 people, including 55 controls, then underwent an extensive 2-day battery of clinical measurements, including sleep studies, cognitive tests, biochemical analyses, and gene-expression studies on blood cells. This part of the study alone cost upward of $2 million, says Reeves.

In another unusual step, CDC's Suzanne Vernon then handed this massive data set to four teams of outside epidemiologists, mathematicians, physicists, and other experts. They spent 6 months examining statistical patterns in the data. For instance, one group analyzed patient characteristics such as obesity, sleep disturbance, and depression and grouped them into four to six distinct subtypes; they also looked for different gene-expression patterns in these categories. Some groups also looked for associations between CFS and 43 common mutations in 11 genes involved in the hypothalamic-pituitary-adrenal axis, which controls the body's reaction to stress. The 14 papers were published last month in the journal Pharmacogenomics.

The results, which include the finding that the patterns of expression of about two dozen genes involved in immune function, cell signaling, and other roles are different in CFS patients, provide what Harvard University CFS researcher Anthony Komaroff calls “solid evidence” for a biological basis of CFS. They dispel the notion that “this is a bunch of hysterical upper-class professional white women,” says Reeves.

Other scientists are much more cautious. The gene-expression results, says Jonathan Kerr of Imperial College London, are “meaningless” because they don't demonstrate conclusively, using the polymerase chain reaction, that the genes' RNA is indeed expressed. After this step, says Kerr, 30% to 40% of genes could drop out.
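The hazard in screening 43 candidate variants in groups this small can be seen with a quick null simulation. This is an illustrative sketch, not the CDC analysis: the variant count and group sizes echo the study, but the 30% carrier frequency, the independence of the variants, and the simple two-proportion z-test are all assumptions made here for convenience.

```python
import numpy as np
from math import erfc, sqrt

def two_prop_p(a, n1, b, n2):
    """Two-sided z-test p-value for a difference in carrier frequency."""
    p = (a + b) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(a / n1 - b / n2) / se
    return erfc(z / sqrt(2))

# Hypothetical null world: 43 variants, none truly associated with the
# illness, all with the same 30% carrier frequency in both groups.
rng = np.random.default_rng(0)
n_variants, n_cases, n_controls, n_sims = 43, 100, 55, 2000

hits = 0
for _ in range(n_sims):
    cases = rng.binomial(n_cases, 0.3, n_variants)     # carriers among cases
    ctrls = rng.binomial(n_controls, 0.3, n_variants)  # carriers among controls
    pvals = [two_prop_p(a, n_cases, b, n_controls)
             for a, b in zip(cases, ctrls)]
    if min(pvals) < 0.05:
        hits += 1

fraction = hits / n_sims
print(f"Null studies with at least one nominally significant variant: {fraction:.2f}")
```

A large majority of such null studies turn up at least one nominally significant variant by chance alone, which is why complex-trait geneticists ask for replication before treating any single small-sample hit as real.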
The most controversial assertion, however, is that the Wichita study has tied CFS to particular mutations in three genes, including the glucocorticoid receptor and one affecting serotonin levels. Genetic epidemiologists are skeptical for two reasons. First, the team looked for associations with just 43 gene variants; some other set of genes might have correlated just as closely, notes Nancy Cox of the University of Chicago in Illinois. Second, the researchers studied no more than 100 or so individuals with fatigue. The results, although they meet the threshold for statistical significance, are “very likely not robust,” says Sullivan. (Sullivan himself has co-authored twin studies finding a “modest” genetic component for CFS, although without pointing to a particular gene.)

Reeves doesn't disagree: “One of our caveats is that it is a small study,” he says. CDC researchers are now planning to repeat the study with 100 CFS patients. Vernon says her group is also validating the gene-expression results and will hold another computational exercise next month at Duke University in Durham, North Carolina, with a larger data set.

Meanwhile, Gerberding has suggested that the same multipronged approach could be applied to seek genetic links to other complex diseases such as autism. That's already being done for many other diseases, from cancer to schizophrenia, notes Sullivan, although the studies use much larger samples and search the entire genome for disease markers. That scale may never be possible for relatively uncommon diseases such as CFS, he says. And he and other human geneticists warn that it's unclear whether any conclusions can be drawn from gene hunts carried out on such very small sample sizes.

3. U.S. RESEARCH FUNDING

# Industry Shrinks Academic Support

1. Yudhijit Bhattacharjee

After 2 decades of steady increases, industrial funding for U.S. academic research is on the skids, according to a new report* from the National Science Foundation (NSF).
University and industry officials say the 5% cumulative decline from 2002 to 2004—the first-ever 3-year slide for a funding source since NSF began compiling such data in the 1950s—reflects a slowing economy and shrinking company research budgets. But some fear the trend might continue even as the economy picks up unless companies and universities figure out how to share the fruits of industry-sponsored research.

University officials see collaborations with the private sector as an increasingly important revenue source. But they also want to maximize income from technologies developed on campus. Institutions have become so aggressive in protecting intellectual property arising out of industry-funded projects, some industry representatives say, that negotiating research contracts is becoming more difficult and time-consuming. “Even if we come in with the ideas and the money, we are expected to pay a licensing fee for the product of research that we already paid for,” says Stanley Williams, a computer scientist at Hewlett-Packard Laboratories in Palo Alto, California. “Then we get into a negotiating dance that can take 2 years, by which time the idea is no longer viable.”

That hard-nosed attitude could hurt universities in the long run by damaging their relationship with industry, says Susan Butts, director of external technology at Dow Chemical Co. and co-founder of a national university-industry group that is working to improve partnerships between the two sectors. “Most universities receive more industry funding for research than total revenue from licensing inventions,” she says. (The ratio is 3 to 1, according to the 2004 survey of the Association of University Technology Managers.) “It doesn't make sense to jeopardize funding to try to increase licensing income.”

Michael Pratt, director of corporate business development at Boston University, says most companies have fewer dollars available for outside research when times are tough.
Susan Gaud, chair of the Industrial Research Institute, adds that “funding research doesn't fit into” the increasing emphasis on short-term corporate productivity. But like Butts, she sees protracted negotiations as a contributing factor behind the current decline and a stumbling block for the future. Williams says he wouldn't be surprised if the trend continues, fueled in part by a receptive audience among academics in France, Russia, and China. “We call it research by purchase order. You get on the phone, you talk to a professor, and the professor says it'll cost this much,” he says. “The research can start the next day. And we own everything that comes out of it.”

4. QUANTUM OPTICS

# A New Way to Beat the Limits on Shrinking Transistors?

1. Adrian Cho

A new lithography scheme could sidestep a fundamental limit of classical optics and open the way to drawing ultraprecise patterns with simple lasers, a team of electrical engineers and physicists predicts. If it works, the scheme would allow chipmakers to continue to shrink the transistors on microchips using standard technologies. “It seems quite cool, and it could be an advance in the field,” says Jonathan Dowling, a mathematical physicist at Louisiana State University in Baton Rouge. Still, he says, researchers have a long way to go before they can put the plan into practice.

Chipmakers “write” the pattern of transistors and circuits on a microchip by shining laser light onto a film called a photoresist that lies atop a silicon wafer. According to classical optics, the light cannot create a pattern with details smaller than half its wavelength—the so-called diffraction limit. So to shrink transistors, chipmakers must use light of shorter wavelengths, such as ultraviolet light and soft x-rays. The problem is, ordinary lenses don't work at such short wavelengths.

Physicists know that, in theory, they can beat the diffraction limit through quantum weirdness.
They split a beam of light, send the two halves on paths of different lengths, and bring them back together at the surface of the target. The recombining beams can interfere with one another to make a pattern of bright and dark regions. The trick is to create a quantum connection called “entanglement” between a pair of photons traveling the two paths so that the duo acts like a single photon of twice the energy and half the wavelength. The interfering beams can then write details onto the photoresist that are half the size of the diffraction limit. Entangling more photons produces smaller features still.

Such “quantum lithography” has yet to find its way into production lines, however, largely because it's hard to produce the entangled photons. Now, electrical engineer Philip Hemmer and physicist Suhail Zubairy of Texas A&M University in College Station and colleagues have concocted a scheme that they say can produce the same result with ordinary unentangled laser light.

Instead of splitting a beam, the researchers propose shining two “signal” lasers of slightly different wavelengths onto a surface coated with photoresist. They would also shine two “drive” lasers onto the same spot. In their scheme, the molecules of the photoresist can absorb only a very specific amount of energy—less than twice the energy of either signal beam. If the drive beams are tuned just right, a molecule would simultaneously absorb two photons from one signal beam or the other—but not one from each—while spitting a single photon into one of the drive beams. The photoresist would effectively be patterned by beams of twice the energy and half the wavelength of each signal beam, yielding details half as small as the diffraction limit would allow, the researchers report in the 28 April Physical Review Letters.

The challenge will be to find just the right absorbing material for the photoresist, says Dowling, one of the inventors of the entanglement approach.
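The halved-wavelength arithmetic can be checked in the simplest textbook caricature: two interfering beams write fringes of period half a wavelength, and a resist that responds to the squared intensity picks up a component with half that period again. This sketch deliberately ignores the drive beams and the resonance engineering the Texas A&M scheme relies on; it only shows where the doubled spatial frequency comes from.

```python
import numpy as np

lam = 1.0                      # signal wavelength, arbitrary units
k = 2 * np.pi / lam
x = np.linspace(0.0, 2.0 * lam, 2001)

# Two counter-propagating beams form a standing wave; the time-averaged
# intensity has fringes of period lam/2, the classical diffraction limit.
intensity = np.cos(k * x) ** 2

# A resist that must absorb two photons at once responds to intensity^2.
# Expanding cos^4(kx) exposes a term oscillating as cos(4kx):
#   cos^4(kx) = 3/8 + (1/2) cos(2kx) + (1/8) cos(4kx)
two_photon = intensity ** 2
expansion = 3 / 8 + 0.5 * np.cos(2 * k * x) + 0.125 * np.cos(4 * k * x)
assert np.allclose(two_photon, expansion)

# cos(4kx) repeats every lam/4, half the classical fringe period.
fine_period = 2 * np.pi / (4 * k)
print(f"classical fringe period: {lam / 2}, doubled-frequency period: {fine_period}")
```

The identity shows that the doubled spatial frequency is present even classically; the hard part, which the proposed resonance condition must supply, is suppressing the ordinary cos(2kx) fringes so that only the fine term patterns the resist.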
“I call this conservation of magic,” he says. “Either you have to have a magic state of light, or you need a magic absorber.” Yanhua Shih of the University of Maryland, Baltimore County, adds that the researchers have shown only that they can make a tight pattern of parallel lines. In principle, the new technique may not be able to make more elaborate patterns, Shih says. Others say that, in theory, any pattern can be fashioned from an appropriate superposition of stripes. Hemmer says he's working on an experimental realization of the scheme. Time will tell whether it's one giant half-step for technologists.

5. IMMUNOLOGY

# Differences in Immune Cell "Brakes" May Explain Chimp-Human Split on AIDS

1. Jon Cohen

A new study that compares the immune responses of chimps and humans offers yet more compelling evidence that subtle differences in gene activity can result in big distinctions between the two species. The researchers, led by hematologist Ajit Varki of the University of California, San Diego (UCSD), suggest that their findings may explain why chimps and other great apes do not typically develop AIDS when infected with HIV, cirrhosis after infection with hepatitis B or C viruses, or any of several other diseases common in humans. Pathologist Kurt Benirschke, who also is at UCSD but did not participate in this study, calls the new work “terrific” and “really great science.”

As they report in the 1 May Proceedings of the National Academy of Sciences, Varki and co-workers studied proteins called Siglecs that his lab co-discovered in the 1990s. Many immune cells express Siglecs (which stands for sialic acid-recognizing Ig-superfamily lectins), and some of them appear to calm the immune response by preventing a process of immune cell expansion known as activation. Humans and apes share the same Siglec genes, but Varki's group explored whether they were turned on to the same degree in T lymphocytes taken from humans, chimps, gorillas, and bonobos.
Using monoclonal antibodies to various Siglecs, the researchers found that although the T cells of people from many different geographic and ethnic backgrounds sported low levels of the Siglecs or none at all, the ape T cells produced clearly detectable amounts of the molecules. When they genetically engineered the human cells to express high levels of one key Siglec, they found that, as predicted, T cell activation was inhibited. Conversely, they cranked up activation of chimp cells by using an antibody to block that same Siglec on them. “I think what's happening is that Siglecs are providing a brake in ape T cells,” says Varki. “Human T cells seem to have lost these brakes.”

Varki and his co-authors speculate that early humans faced novel pathogens as they migrated into new areas, which may have created pressure for hyperactivated T cells to evolve. Varki's team notes that AIDS, chronic hepatitis B and C, rheumatoid arthritis, bronchial asthma, and type 1 diabetes are all T cell-mediated diseases that are linked to overactivation of the immune cells—and none appear to afflict apes. Varki says it may ultimately be possible to develop a therapy that turns up expression of Siglecs in humans with these diseases, dampening activation and preventing symptoms. But he emphasizes that this study only hints at that possibility. “At the moment it's all in vitro,” stresses Varki. “But it is all internally consistent with what we know about the biology.”

The new insights on Siglecs may also help avoid tragedies like the one that recently occurred in a U.K. drug trial (Science, 24 March, p. 1688). The study involved a monoclonal antibody that stimulates T cell activation and proved safe in monkeys. But when given at much lower doses to six humans, it quickly caused serious illness.
“When it comes to the immune system, be careful about predicting whether a primate model will predict human responses, especially for T cells,” cautions Varki, noting that an in vitro comparison of rhesus and human cells similar to his study might have revealed the stark differences between the species.

AIDS researchers are taking note of the new work as well. In AIDS, T cell activation appears to play a central role in the destruction of the immune system as it signals the expansion of lymphocytes bearing CD4 receptors, the key cells that HIV targets and destroys. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, points out that such activation in people creates more targets for HIV. “If you didn't have that robust activation, you wouldn't provide a fertile environment for HIV to replicate,” he says. “Varki makes an interesting story.” Immunologist Gene Shearer of the National Cancer Institute in Bethesda, Maryland, who studies how HIV causes immune destruction by activating CD4 cells that then commit suicide, agrees that the lower Siglec expression in human T cells could be an important new factor to study in AIDS. Varki's results have already prompted him to “do some rethinking.”

6. NEUROBIOLOGY

# Despite Mutated Gene, Mouse Circadian Clock Keeps on Ticking

1. Greg Miller

If you opened up a clock and took out the most important-looking cog in its works, how well do you suppose the clock would work? A team of neurobiologists recently tried an analogous experiment with the biological clocks of mice, knocking out a gene thought to encode a crucial part of the molecular machinery that generates circadian rhythms. To their great surprise, the clock didn't stop. The mutant mice had nearly normal daily cycles of activity, even in total darkness, and the activity levels of other clock genes continued to wax and wane.
“It was pretty heretical,” says Steven Reppert, who reports the findings in the 4 May issue of Neuron along with David Weaver and Jason DeBruyne of the University of Massachusetts Medical School in Worcester and other colleagues. “On first reading, this is a striking finding,” says Steve Kay, a clock researcher at the Scripps Research Institute in San Diego, California. Still, Kay and others aren't quite ready to toss out everything they know about molecular clocks. “There's a distinct possibility that this result has a simple explanation,” Kay says.

The current model of the mammalian circadian clock has at its heart two transcription factors, CLOCK and BMAL1, that drive a 24-hour cycle of gene expression. The details are complicated, but the general idea is that CLOCK and BMAL1, bound together, kick off each cycle by spurring transcription of several other genes, whose protein products accumulate and interact in the ensuing hours, ultimately switching off CLOCK-BMAL1 to bring the cycle to an end.

Researchers already knew that knocking out the Bmal1 gene abolishes circadian rhythms, and they have long assumed that knocking out Clock would do the same. Joseph Takahashi and colleagues at Northwestern University in Evanston, Illinois, first identified Clock in 1994 and reported that mice with a mutated version of the gene that produces an altered CLOCK protein have extraordinarily long circadian cycles, up to 28 hours (Science, 29 April 1994, p. 719). Such mice also have disrupted molecular rhythms in the suprachiasmatic nucleus, the site of the master body clock in the brain. But creating mice with a completely disabled Clock gene has proven tricky, Takahashi says: “My lab has been working on it all this time.” Now Reppert's team has succeeded. “To our surprise and shock, the animals ended up having almost no change in circadian behavior,” he says.
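The model's logic, in which an activator spurs transcription of repressors whose delayed accumulation shuts the activator off, is the classic recipe for a biochemical oscillator, and a toy version fits in a few lines. Everything below is a caricature with invented parameters, not the published network: a single delayed negative-feedback equation stands in for the whole CLOCK-BMAL1 loop.

```python
import numpy as np

# Toy delayed negative feedback: an activator (standing in for
# CLOCK-BMAL1) drives production of a repressor P; after a lag tau
# (transcription, translation, nuclear re-entry) the accumulated P
# shuts production off. Parameters are illustrative, chosen only so
# that the loop oscillates; none are measured values.
a, K, n = 1.0, 0.5, 4        # max production, repression threshold, Hill exponent
b = 0.2                      # repressor degradation rate
tau, dt, t_end = 10.0, 0.05, 400.0

steps = int(t_end / dt)
lag = int(tau / dt)
P = np.zeros(steps)
P[0] = 0.1
for t in range(1, steps):
    delayed = P[max(t - 1 - lag, 0)]                    # repressor level tau ago
    production = a / (1 + (delayed / K) ** n)           # repressed synthesis
    P[t] = P[t - 1] + dt * (production - b * P[t - 1])  # Euler step

tail = P[steps // 2:]                                   # discard the transient
print(f"repressor swings between {tail.min():.2f} and {tail.max():.2f}")
```

With the lag removed, the same loop settles to a steady level instead of cycling, which is the intuition for why delayed feedback sits at the heart of clock models; it also hints at how a substitute activator could keep such a loop running when one component is deleted.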
The standard way to test a mouse's circadian clock is to put it in a cage with a running wheel and repeatedly turn the lights on for 12 hours, then off for 12 hours. The nocturnal animals normally spend far more time on the wheel during the dark periods, and thanks to the internal clock, they maintain this activity pattern even when the lights are off for weeks at a time. CLOCK-deficient mice did the same, for up to 6 weeks of solid darkness, Reppert and colleagues found. Further investigations revealed that the cyclical activity of other clock-related genes continued in the brains and livers of the CLOCK-deficient mice, although with some abnormalities. “In general, it looked like the molecular clock still moved forward in the brain,” Reppert says. The circadian clock in the liver appeared to be more substantially altered.

“The work is very good, and the result is somewhat unexpected,” says Takahashi. However, he adds, “I don't think it says CLOCK is not playing an important role normally.” In Takahashi's view, the new work simply shows that when CLOCK isn't there, something else can take its place. The reason for the severe disruption in the original Clock mutants may be that the altered CLOCK protein kept this substitute from interacting with BMAL1, he says. Reppert and colleagues suspect that the substitute is a related transcription factor called NPAS2; they report in Neuron that it binds with BMAL1 in the brains of CLOCK-deficient mice. Now the researchers are working to create mice missing both CLOCK and NPAS2. That's a crucial experiment, says Kay: “If the mice are rhythmic, then our field will need a paradigm shift.”

7. ENERGY RESEARCH

# Industry Conservation Programs Face White House Cuts

1. Eli Kintisch

Mechanical engineer Christophe Beckermann has developed software to help the U.S. metal-casting industry reduce waste and save energy by modeling how cracks form as the metal cools.
At a time when politicians are demanding that the country become more energy efficient, the research by his 10-person team at the University of Iowa, Iowa City, seems like a sure-fire winner.

Think again. Last week, representatives of energy-intensive industries from steel to chemicals came to Washington, D.C., to lobby against a 15% cut proposed by the Bush Administration in the Industries of the Future (IOF) program, which funds Beckermann and other researchers on projects including papermaking studies and combustion research. The cut is part of an $87 million bite the White House wants to take out of the $606 million program for efficiency research and technology at the Department of Energy (DOE), the latest twist in what supporters call a “death spiral” for a program that was once much larger.

A report last year by the National Academies' National Research Council found “significant cumulative energy and cost savings” in the seven energy-intensive industries covered by IOF, a sector that together consumes three-quarters of the energy used by U.S. industry. And last week, a report from the American Council for an Energy-Efficient Economy described improvements developed by efficiency researchers such as Beckermann, who benefit from matching funds from industry on each grant, as “low-hanging fruit.”

But DOE officials argue that private companies should pick up the tab for that harvest. “With high energy prices, there's incentive for industry to take on some of these programs,” says Jacques Beaudry-Losique, DOE's industrial technology manager. Toni Grobstein Marechaux, a former director of the academies' manufacturing and engineering design board, says the IOF program conducts research that industry wouldn't do on its own. Corporate leaders are reluctant to wait for the payoff from most efficiency research, she says, even with energy prices rising. The government also helps companies avoid potential antitrust issues when they work collaboratively with competitors.
In addition, the program subsidizes work that some heavy industries simply lack the funds to carry out. Most metal-casting firms are small and employ few if any engineers, says Beckermann, pointing to the crude modeling software now in use as evidence. IOF's supporters also worry about the next generation of energy-focused industrial engineers in the wake of the Administration's proposed 35% cut in the related $6.4 million Industrial Assessment Centers program, which allows undergraduates to serve as energy-saving consultants to manufacturing plants. Patrick Johnson, a manager at the glass giant Corning, appreciates the value of the energy audits that the students perform. “Sometimes Corning employees have blinders on,” he says.

DOE officials say the cuts are a necessary consequence of limited resources. Those budget pressures place a higher priority on new, long-term research into fuel sources such as cellulosic ethanol or nuclear power (Science, 10 March, p. 1369). On the other end of the spectrum, they note that 72 teams have completed assessments of energy-intensive manufacturing sites under the department's current “Save Energy Now” campaign. Despite the proposed cuts, they add, the IOF program will still support work with industry in fields including nanomaterials and catalysis.

Congressional staffers say that DOE's proposed cuts are penny-wise and pound-foolish. “Congress will have to restore funding to some of these accounts,” says a House appropriations staffer. Beckermann hopes they do, calling the program “a model for industry-government collaboration.”

8. CLIMATOLOGY

# A Tempestuous Birth for Hurricane Climatology

1. Richard A. Kerr

Launching a new field of science is always tricky, but starting up the study of hurricane behavior over the decades—in the wake of Katrina—has proved challenging indeed

MONTEREY, CALIFORNIA—It's not every day one can witness the inception of a new field. Researchers attending the 27th Conference on Hurricanes and Tropical Meteorology late last month* got their usual diet of potential vorticity analyses and Madden-Julian oscillations. But they also debated a newborn science created to assess whether tropical cyclones—variously called hurricanes, typhoons, or cyclones—have strengthened under global warming.

Until Science and Nature published two papers last year contending that tropical cyclones around the world had strengthened, few scientists paid much attention to long-term variations in such storms. In those papers, a couple of academic meteorologists took the long view of tropical cyclone records compiled day-by-day by weather forecasters. From those weather records, the meteorologists extracted storm climatologies, statistical histories of storm behavior that could be searched for any change over the decades. Unexpectedly, they found a surge in tropical cyclone intensity—and if it continued under global warming, they concluded, it would noticeably amplify the destruction in coming decades. In the wake of Hurricane Katrina, those analyses were enough to spark a highly public and sometimes raucous new field: hurricane climatology.

In theory, tropical cyclones aren't supposed to be noticeably stronger after a few decades of warming, which explains why no one had been searching for a trend in the ups and downs of the storm record. Then, at an October 2004 press conference, meteorologist Kevin Trenberth of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, reacted to claims that the particularly active 2004 season in the Atlantic Ocean was just part of a natural cycle that pumps up Atlantic storms every few decades. He and the others on the panel “didn't think that was right,” says Trenberth. “We thought global warming was playing a role.” The burst of hurricane activity since 1995 just seemed too strong to be entirely natural.

Others thought Trenberth was the one getting it wrong. “We were rather skeptical” of Trenberth's remarks, says Peter Webster, who specializes in monsoons and other tropical phenomena. “We thought Kevin was sounding his trumpet a little bit.” So Webster, a meteorologist at the Georgia Institute of Technology in Atlanta, and his colleagues looked at records of maximum wind speed of storms around the world as gauged from satellite images.

In the 16 September 2005 issue of Science (p. 1844), less than 3 weeks after Hurricane Katrina ravaged New Orleans, Webster and colleagues reported that in fact the abundance of tropical cyclones had not increased between 1970 and 2004. But the number of the strongest storms—those in categories 4 and 5—had jumped 57% from the first half of the period to the second. That reinforced findings by meteorologist and hurricane specialist Kerry Emanuel of the Massachusetts Institute of Technology in Cambridge. He had reported in the 4 August 2005 issue of Nature that the total power released during the lives of Atlantic and western North Pacific storms had risen between 40% and 50% from the first half of a 45-year record to the last half.
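The half-versus-half comparison behind Webster's 57% figure is straightforward in outline: tally the category 4 and 5 storms in each half of the record and take the percentage change. A minimal sketch in Python, using invented toy counts rather than the real satellite-derived tallies:

```python
# Illustrative sketch with invented numbers (not the real best-track
# data): count category 4-5 storms in each half of a multi-decade
# record and compute the percentage change between halves.

def pct_change_strong_storms(yearly_counts):
    """yearly_counts: list of (year, n_cat45) pairs, sorted by year.
    Returns the percent change from the first half to the second."""
    half = len(yearly_counts) // 2
    first = sum(n for _, n in yearly_counts[:half])
    second = sum(n for _, n in yearly_counts[half:])
    return 100.0 * (second - first) / first

# Hypothetical 4-year toy record: 10 strong storms, then 15.
toy = [(1970, 4), (1971, 6), (1972, 7), (1973, 8)]
print(pct_change_strong_storms(toy))  # 50.0 for this toy data
```

The real analysis of course rests on how reliably those category assignments were made in the first place, a point taken up below.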

Both Webster and Emanuel were taken aback by their own findings. “I changed my mind in a big way” about how much the warming could be intensifying storms, says Emanuel. But it wasn't just because of the apparent upward trend of storm intensity. When Emanuel looked at how storm power and ocean temperature had varied, “what I found startled me,” he told the conference. In the area just north of the equator in the Atlantic Ocean, where most hurricanes get their start, the power released during the lifetimes of storms is “spectacularly well correlated with sea surface temperature,” says Emanuel. Hurricane intensity had risen along with temperature over the past half-century, even matching ups and downs along the way.
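Emanuel's measure of "total power released" is his power dissipation index: roughly, the cube of each storm's maximum wind speed integrated over its lifetime, summed across all storms. A rough sketch of that bookkeeping, assuming hypothetical 6-hourly wind fixes of the kind best-track archives record:

```python
# Rough sketch of a power dissipation index (PDI): sum of v_max**3
# over all track fixes, times the interval between fixes. Storm
# tracks here are invented for illustration.

def power_dissipation_index(tracks, dt_hours=6.0):
    """tracks: list of storms, each a list of maximum wind speeds
    (m/s) recorded at dt_hours intervals (6-hourly is typical of
    best-track archives). Returns the summed v**3 * dt, in m**3/s**2."""
    dt_seconds = dt_hours * 3600.0
    return sum(v**3 * dt_seconds for storm in tracks for v in storm)

# Two hypothetical storms, winds in m/s at 6-hour fixes.
season = [[20.0, 35.0, 50.0, 30.0], [25.0, 40.0, 25.0]]
pdi = power_dissipation_index(season)
```

Because the wind speed is cubed, a modest intensification of the strongest storms produces a large jump in the index, which is part of why the trend Emanuel reported is so pronounced.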

The region where Atlantic hurricanes develop has in turn warmed in step with the Northern Hemisphere for the past half-century, Emanuel noted. And that warming is widely held to be driven at least in part by rising greenhouse gases. Two studies may not be enough to prove that Trenberth is right about greenhouse warming driving storm activity, but both Emanuel and Webster now believe they see a strengthening of tropical cyclones suspiciously in synchrony with global warming.

## Stormier models

Newly minted hurricane climatologists at the conference got some support from climate modelers. Kazuyoshi Oouchi of the Advanced Earth Science and Technology Organization in Yokohama, Japan, and his colleagues presented results from highly detailed climate simulations run on Japan's Earth Simulator, the world's most powerful supercomputer devoted to earth sciences. Global climate models typically calculate climate at points 200 kilometers or more apart. The resulting pictures of climate are too fuzzy to pick up anything as small as a tropical cyclone. But the Japanese group simulated the present climate and the greenhouse-warmed climate near the end of the century at a resolution of just 20 kilometers, thanks to the Earth Simulator's power. That was detailed enough for tropical cyclones to appear in the model, allowing the researchers to roughly gauge their intensities. In the warmer world, the total number of storms over the globe had actually decreased by 30%. But the number of the rarer category 3 and 4 storms had increased substantially, not unlike Webster's observational results.

Additional support for intensification also came from a new, independent analysis of wind data, mentioned in passing at the conference. Climate researchers Ryan Sriver and Matthew Huber of Purdue University in West Lafayette, Indiana, will soon report in Geophysical Research Letters on how they measured the power released by storms by using a compilation of the world's weather data developed at the European Centre for Medium-Range Weather Forecasts in Reading, U.K. They found a 25% increase in storm power between the first half of the 45-year record and the second, consistent with Emanuel's analyses.

## Even stormier objections

Intensifying tropical cyclones possibly driven by the greenhouse might sound like one more chapter in the familiar global warming story—melting glaciers, rising seas, searing heat waves—but some read it differently. The U.S. National Oceanic and Atmospheric Administration (NOAA) stated last year in press releases that “longer-term climate change appears to be a minor factor” in “the most devastating hurricane season the country has experienced in modern times.” The surge in Atlantic hurricane activity since 1995 is the latest upswing in a natural cycle, the releases said. As the Atlantic Ocean warms and wind patterns shift, hurricanes increase for a decade or two until a lull sets in again.

NOAA's seemingly official pronouncements sounded moderate next to those of William Gray of Colorado State University (CSU) in Fort Collins, an elder statesman of the hurricane community. Well known for his forecasts of the coming hurricane season, Gray pushes natural cycles with a vengeance. “There's definitely a big multidecadal cycle going on,” he said in his conference talk, not just in the Atlantic Ocean but around the world. “This isn't global-warming induced; this is natural. We may go for a few years, and then it's going to cool.” And with the cooling, hurricanes will calm down again, he said, just as they have before.

The warming of recent decades and the jump in hurricanes are being driven by a surging ocean current that brings warm water northward into the Atlantic Ocean, Gray insisted. Climate models in which mounting greenhouse gases drive global warming have simply got it wrong, he said; greenhouse gases are feeble agents of warming. “I'm a great believer in computer models,” he said. The audience laughed skeptically. No, he assured them, “I am—out to 10 or 12 days. But when you get to the climate scale, you get into a can of worms. Any climate person who believes in a model should have their head examined. They all stink.”

“This was highly entertaining,” observed meteorologist Gregory Holland of NCAR from the audience, “but unfortunately you obfuscate the real issue.” Holland, a graduate student of Gray's at CSU in the early 1980s, is second author on Webster's Science paper. He went on to staunchly defend climate models. “Right now, they're providing a very good, rational picture of the situation,” he concluded. Supportive applause rose from the audience.

Later, during a panel discussion, Emanuel questioned even the existence of a natural cycle of Atlantic warming and cooling, at least one that influences the development of hurricanes. He believes a misstep in a classic analysis—a 2001 Science paper on which Gray was an author—tended to create a cycle where none exists. By subtracting a linear trend from a record of rising temperature that actually contained the nonlinear, upward-curved trend of global warming, the analysis created an oscillation, he said. In the end, Emanuel finds “no evidence for natural cycles in late summer tropical Atlantic sea surface temperature on these time scales,” which for him leaves global warming as the leading candidate for a driver.
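The detrending misstep Emanuel describes is easy to demonstrate: fit a straight line to a smoothly accelerating (upward-curving) series and the residual swings positive, negative, then positive again, looking like an arc of a slow oscillation. A toy illustration with a pure quadratic "warming" series and invented numbers, no noise:

```python
# Toy demonstration: removing a linear trend from an upward-curving
# (quadratic) warming series leaves a bowl-shaped residual that can
# be mistaken for part of a slow natural oscillation.

def linear_fit(xs, ys):
    """Least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

years = list(range(50))
warming = [0.0002 * t * t for t in years]   # accelerating trend, deg C
slope, intercept = linear_fit(years, warming)
residual = [y - (slope * t + intercept) for t, y in zip(years, warming)]

# The residual starts positive, dips negative in the middle, and ends
# positive: an apparent "cycle" created entirely by the detrending.
print(residual[0] > 0, residual[25] < 0, residual[-1] > 0)
```

Nothing in the input oscillates; the apparent cycle is an artifact of forcing a straight line through a curved trend.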

## A haphazard record

Emanuel's line of argument caught critics by surprise, and his challenge to a purely natural driver for hurricane activity went largely unanswered at the conference. NOAA scientists such as meteorologist Christopher Landsea of the National Hurricane Center in Miami, Florida—another former Gray student—claimed that NOAA public affairs staff members writing the press releases had overstated the case for a natural cycle. And the warming may well be largely human-induced, said Landsea. The question, he argued, is not what's causing tropical warming but how much impact that warming has on hurricane intensity. Not much, he suspects. According to theory and computer modeling, by now the intensification should be only a sixth of what Webster and Emanuel have reported, he noted. Rather than a real strengthening, Landsea said, Emanuel and Webster may well be seeing a fictitious one created by a deeply flawed hurricane record.

Emanuel disagrees. “They tend to count [the anomalous strengthening] against the observations,” he says. “I count it against the theory, although I helped develop the theory.”

That moved the debate to the observational record of tropical cyclone intensity, where attendees found considerable grounds for agreement: The record is far, far from perfect. Hurricane climate researchers face the same problem climate researchers faced when they began searching for signs of global warming. Weather forecasters created the surface temperature record as they went about their business predicting the next day's weather. They never planned to string their twice-daily measurements into a century-long record, so they had no qualms about moving their thermometers from place to place, and they paid no mind when heat-retaining cities grew up around them.

Likewise, forecasters' observations of tropical cyclones “weren't designed to be climate records,” notes Landsea. And to make matters even worse for hurricane climatologists, hurricane forecasters almost never directly measure storm intensity—the maximum wind speed 10 meters above the surface averaged over 1 minute. Instead, they usually gauge maximum wind speed indirectly.

Indirect measurements of storm intensity haven't always been done well or frequently, either. By the time Hurricane Carol hit in 1954, Landsea noted, forecasters were flying into storms—if they weren't too strong—and judging wind speed by looking down at waves on the ocean. Sometimes they would estimate maximum winds by making their best guess of how winds would change from their aircraft's altitude to the near-surface. Such sampling wouldn't have caught Hurricane Wilma's 1-day leap last fall from minimal hurricane to category 5, he said. That took eight types of measurements made 280 times over 12 days, and some of those measurements required air-dropped instrumentation and satellites. Even today, says Landsea, “there are big discrepancies about how strong a storm was.” The U.S. and Japanese typhoon warning centers for the Pacific Ocean—where no aircraft reconnaissance is done now—at times differ in their estimates by as much as two categories.

At the conference, a half-dozen speakers documented the sad state of tropical cyclone intensity measurements. Two groups—led by John Knaff of CSU and by Bruce Harper of Systems Engineering Australia Proprietary Limited in Brisbane—attempted to correct intensity records from parts of the Pacific Ocean for now-obvious errors. Both reanalyses reduced the upward trend of storm intensity. Knaff's work, in particular, suggests that Emanuel's and Webster's studies “may have been premature,” says Landsea. “The database wasn't up to it.”

On the other hand, the reanalyses did not eliminate the trend. “The data's not very good,” agreed Webster. “However, to say it's all artificial is an exaggeration. We would have had to have misidentified 160 to 180 category 4's and 5's.” He doubts they were off by that much.

The discussions at the conference, although informative, did not change many minds. “There are persuasive arguments on both sides,” says Hugh Willoughby of Florida International University in Miami, a former director of NOAA's Hurricane Research Division. “Honestly, we don't know” who's right, he says. “That's the real story.” Tropical cyclones probably intensify under warmer climates, he says, but most likely not as much as Webster and Emanuel believe. “We don't know where we are in the middle.” That's what the new field of hurricane climatology hopes to find out.

• * 27th Conference on Hurricanes and Tropical Meteorology, 24–28 April, Monterey, California (sponsored by the American Meteorological Society).

NUCLEAR PHYSICS

# Indian Angst Over Atomic Pact

1. Pallava Bagla

Facing new restrictions on where and with whom they work, some Indian nuclear scientists assert that a historic agreement will narrow their research horizons

MUMBAI—Over the years, physicists at two scientific powerhouses here—the Tata Institute of Fundamental Research (TIFR) and the Bhabha Atomic Research Centre (BARC)—have enjoyed a tight-knit collaboration in superconductivity research, producing two dozen papers and a handful of patents. But their days of working side by side may be numbered. Under a controversial nuclear deal with the United States, India has agreed to separate its vast nuclear establishment into civilian and military programs—and BARC is on the military list. If the pact is finalized, BARC scientists may no longer be allowed to work with their civilian counterparts. If TIFR were forced to sever all linkages with BARC, says TIFR director Sabyasachi “Shobo” Bhattacharya, a condensed matter physicist, “it would be a real tragedy, but the worst-case scenario may not unfold.” Others are not so optimistic.

In March, India and the United States unveiled a landmark agreement that would end India's status as a pariah for having snubbed the Nuclear Nonproliferation Treaty and acquired a nuclear arsenal. Under the agreement, India would be allowed to import civilian nuclear technology in exchange for submitting key facilities to international inspections. Even before the ink was dry, however, some U.S. nonproliferation analysts assailed the pact as a bad deal that would not make the world safer. Indian scientists and officials, meanwhile, hailed it as a triumph (Science, 10 March, p. 1356).

These concerns could undermine negotiations aimed at tweaking the agreement to make it more palatable to the U.S. Congress, which must amend laws for the deal to go through. A Senate hearing on 26 April challenged the deal; Democrat Joseph Biden of Delaware demanded “a full list of India's civilian facilities.” Already, India has spurned a U.S. request to forswear nuclear tests. Indian negotiators, contacted by Science, say they will raise the possibility of continued scientific collaborations between BARC and TIFR—possibly in negotiations with the International Atomic Energy Agency (IAEA) in Vienna, Austria, and during upcoming talks with the United States on a special Indian inspection regime.

Inspections “are extremely intrusive, immensely disruptive, and are often conducted in an atmosphere vitiated by suspicion,” contends physicist P. K. Iyengar, a former chair of India's Atomic Energy Commission who served on IAEA's board of governors. India has listed nine of the Department of Atomic Energy's two dozen or so research facilities as civilian (see table, below). “Any or all research [at these labs] may come under scrutiny,” Iyengar says. IAEA declined to comment on the ongoing negotiations.

Nuclear researchers also object to U.S. demands that India erect firewalls to prevent skilled scientists from serving in both civilian and military labs. Annaswamy Narayana Prasad, a mechanical engineer and former BARC director, says that “effective separation of the two components is a near impossibility.” The proposal would require the Department of Atomic Energy to label each of its 65,000 staff members as civilian or military personnel, possibly freeze movement between the sectors by 2014, and perhaps end collaborations. Says Avinash Khare, a theoretical physicist at the Institute of Physics in Bhubaneswar, this “would be a real calamity.”

India also may have to speed up work on new training and research facilities. Already, India has agreed to decommission two of three research reactors at BARC by 2010. The remaining reactor, the 100-megawatt Dhruva, would be overwhelmed by current users.

## Resurrection

Sackett and her senior staff also had to contemplate Mount Stromlo's long-term future. The 50-inch Great Melbourne Telescope had been scheduled for another major upgrade. Instead, it was redesigned from the ground up, keeping the same size reflector but with a vastly enlarged field of view and a snazzy 300-million-pixel digital camera. Schmidt says the new scope, called SkyMapper, will be capable of surveying the entire southern sky in two nights, something that would “take a lifetime” with current telescopes. A primary target will be keeping an eye out for extremely rare nearby supernovas. “SkyMapper is really going to be quite revolutionary for [studying] supernovas,” predicts the University of Hawaii's Tonry.

SkyMapper, expected to see first light in December or January, is being built at Siding Spring. A high-bandwidth optical fiber cable connection will allow control of the instrument from Mount Stromlo, where increasing light pollution from Canberra is degrading observing conditions.

Inevitably, there have been stumbling blocks. Demolition and construction were delayed by the need to develop a master plan that considered the historical and ecological aspects of development, as Mount Stromlo is an Australian Heritage Site. And ANU has sued its insurers, alleging that they have so far paid only a fraction of what the university says it is entitled to. A shortage of funds is delaying restoration of the Commonwealth Solar Observatory and the replacement of the 74-inch telescope. “That has left a big hole in our observing program,” says Schmidt.

But 3 years after the fire, RSAA has passed or is reaching a number of milestones. NIFS was delivered to Gemini North last fall, and the Adaptive Optics Imager for Gemini South is nearing completion. Once SkyMapper comes on line, it will largely be business as usual again for Mount Stromlo's astronomers. But their remarkable recovery suggests that the fire of 2003, if anything, has broadened their horizons.