News this Week

Science  05 May 2006:
Vol. 312, Issue 5774, pp. 668
  1. VETERANS ADMINISTRATION

    Texas Earmark Allots Millions to Disputed Theory of Gulf War Illness

    1. Jennifer Couzin

    Scientists usually bristle when U.S. legislators mandate a project that benefits their constituents. But Gulf War illness researchers are especially troubled by such a funding provision inserted by Senator Kay Bailey Hutchison (R-TX) in this year's budget for the Department of Veterans Affairs (VA). The $15 million earmark to the University of Texas (UT) Southwestern Medical Center in Dallas not only avoids the traditional peer-review process, but it also marks the rare—and possibly first ever—VA funding of a program outside its research network, and to a researcher whose theory of the debilitating illness hasn't won much scientific support.

    “The particular avenue of research being pursued is not one that has found much favor with the scientific community,” says Simon Wessely, director of the King's Centre for Military Health Research at King's College London. Adds John Feussner, a former head of VA research now at the Medical University of South Carolina in Charleston, “This takes money directly out of the VA research portfolio. … I can't think of any advantage” from the new Gulf War research program.

    Setting priorities.

    Senator Kay Bailey Hutchison (R-TX) (standing, right) and epidemiologist Robert Haley (standing, far left) help launch a new Gulf War illness research center. Hutchison's $15 million earmark more than doubles the Veterans Administration's spending in this area.

    CREDITS: (PHOTO) UNIVERSITY OF TEXAS SOUTHWESTERN MEDICAL CENTER; (SOURCE) U.S. DEPARTMENT OF VETERANS AFFAIRS

    The money will fund a new center at UT Southwestern, unveiled on 21 April. It exists thanks to Hutchison, who chairs the spending panel that sets the VA's budget and has long urged more government-funded research into Gulf War illness. Her priority “is getting the money to the person who can best help battle this illness,” says spokesperson Chris Paulitz. In her mind, that individual is epidemiologist Robert Haley, who for years has reported a strong link between exposure to neurotoxins, such as nerve gas and pesticides, and the puzzling cluster of symptoms that struck thousands of veterans after the 1990-'91 Gulf War.

    Haley was initially funded by former presidential candidate and businessman Ross Perot and later by the Department of Defense. He believes that Gulf War illness is “an encephalopathy” marked by abnormalities in brain structures and in the nervous system. Many troops, he believes, were exposed to low levels of nerve gas during the first Gulf War.

    Now, Haley expects to pin down how these toxins affect the brain, and how to ease their effects, once and for all. Certainly, there's no shortage of funds: Hutchison expects the center—which Haley says will be called the Gulf War Illness and Chemical Exposure Research Center—will receive $75 million from VA over 5 years. Haley says it will initially focus on brain imaging, a survey of veterans from the first Gulf War, animal studies, and a Gulf War illness research and treatment clinic at the Dallas VA Medical Center.

    But “this is not a grant to Robert Haley,” he says. The dean of UT Southwestern's medical school, Alfred Gilman, will convene a merit review committee, and “all of our projects will go through” it, says Haley, adding that the committee's precise function hasn't been set. Traditional peer review as practiced by agencies such as VA and the National Institutes of Health, says Haley, has helped scientists take small steps forward. But it has failed to solve the enigma of Gulf War illness. “If we continue at this rate,” he says, “it's going to be 50 years before we help these people.”

    Haley's Gulf War theories, however, put him in the minority. Animal studies disagree on whether low-dose neurotoxin exposure is deleterious in the long term, and the neurotoxin theory has come up short in expert reviews. In 2004, the Institute of Medicine (IOM) in Washington, D.C., concluded that “there is inadequate/insufficient evidence” to forge a link between exposure to low levels of sarin gas and the memory loss, muscle and joint pain, and other symptoms that characterize Gulf War illness. Wessely argues that British troops, who have the same rates of Gulf War illness as American troops, were nowhere near the Khamisiyah weapons depot in Iraq, the most cited example of suspected nerve gas exposure during the war. The IOM report notes that an attempt to replicate Haley's findings of genetic susceptibility to nerve gas proved unsuccessful.

    A VA committee that included Haley came to a different conclusion. It reported in 2004 that neurotoxin exposure was a “probable” explanation for Gulf War illness and recommended that VA spend at least $60 million over 4 years on Gulf War illness research. The neurotoxin arena “is the most promising area for research at the present time,” says James Binns, a Vietnam veteran and Arizona businessman, who chaired the committee that wrote the report. VA agreed (Science, 19 November 2004, p. 1275) but never put up the money—until Hutchison's amendment compelled it to do so. Initial funding will be limited to UT Southwestern and other schools, generally in Dallas, with which Haley collaborates, he says.

    Joel Kupersmith, VA's chief research and development officer, calls the plan “an opportunity to move ahead on Gulf War research” and expressed “confidence” in UT Southwestern. But then again, VA had little choice but to move forward. “We follow what the laws and regulations are,” says Kupersmith.

  2. BIOMEDICINE

    Genes and Chronic Fatigue: How Strong Is the Evidence?

    1. Jocelyn Kaiser

    The U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, announced last month that it has cracked a medical mystery: Chronic fatigue syndrome (CFS) has a biological and genetic basis. CDC Director Julie Gerberding called the study “groundbreaking” and also hailed its novel methodology. These claims have attracted widespread media attention. But, like most aspects of CFS, the study and its findings are controversial. Some scientists think the agency is overstating the case for a link between the syndrome and genetic mutations. “Most complex-trait geneticists would interpret [these] findings more cautiously than the authors have,” says Patrick Sullivan, a psychiatric geneticist at the University of North Carolina, Chapel Hill.

    CFS is defined as severe fatigue lasting more than 6 months, accompanied by symptoms such as muscle pain and memory problems. It is thought to afflict at least 1 million Americans, mostly women. The lack of specific diagnostic criteria since CFS was first defined 20 years ago has led to debate over whether the cause is an infectious agent, a psychiatric condition, or something else—and made research funding for the disorder highly political. In 2000, a CDC division director lost his job after the agency diverted to other infectious disease studies $12.9 million that Congress had instructed it to spend on CFS research (Science, 7 January 2000, p. 22). The agency agreed to restore the money over 4 years and launch a major study.

    Roots of fatigue.

    CDC Director Julie Gerberding applauds new study on chronic fatigue syndrome. One part divided 111 women into subgroups that correspond to different gene-expression patterns.

    CREDITS: (PHOTO) CDC; (SOURCE) ADAPTED FROM L. CARMEL ET AL., PHARMACOGENOMICS 7, 375–386 (2006)

    The new project, led by William Reeves, CDC's lead CFS researcher (who had blown the whistle on the diverted funds), took an unusual approach. Instead of recruiting patients already diagnosed with CFS, CDC surveyed one-quarter of the population of Wichita, Kansas, by phone to find people suffering from severe fatigue. Several thousand then underwent screening at a clinic for CFS. The population-based aspect is “a big plus” because it avoids the possible bias in tapping a pool of patients seeking treatment for their problems, says Simon Wessely, who studies CFS and a similar disorder, Gulf War illness, at King's College London.

    Out of this survey, 172 people, most of them white middle-aged women, were deemed to fit the criteria for CFS (58) or CFS-like illness (114). A total of 227 people, including 55 controls, then underwent an extensive 2-day battery of clinical measurements, including sleep studies, cognitive tests, biochemical analyses, and gene-expression studies on blood cells. This part of the study alone cost upward of $2 million, says Reeves.

    In another unusual step, CDC's Suzanne Vernon then handed this massive data set to four teams of outside epidemiologists, mathematicians, physicists, and other experts. They spent 6 months examining statistical patterns in the data. For instance, one group analyzed patient characteristics such as obesity, sleep disturbance, and depression and grouped them into four to six distinct subtypes; they also looked for different gene-expression patterns in these categories. Some groups also looked for associations between CFS and 43 common mutations in 11 genes involved in the hypothalamic-pituitary-adrenal axis, which controls the body's reaction to stress. The 14 papers were published last month in the journal Pharmacogenomics.

    The results, which include the finding that the patterns of expression of about two dozen genes involved in immune function, cell signaling, and other roles are different in CFS patients, provide what Harvard University CFS researcher Anthony Komaroff calls “solid evidence” for a biological basis of CFS. They dispel the notion that “this is a bunch of hysterical upper-class professional white women,” says Reeves.

    Other scientists are much more cautious. The gene-expression results, says Jonathan Kerr of Imperial College London, are “meaningless” because they don't demonstrate conclusively, using the polymerase chain reaction, that the genes' RNA is indeed expressed. After this step, says Kerr, 30% to 40% of genes could drop out.

    The most controversial assertion, however, is that the Wichita study has tied CFS to particular mutations in three genes, including the glucocorticoid receptor and one affecting serotonin levels. Genetic epidemiologists are skeptical for two reasons. First, the team looked for associations with just 43 gene variants; some other set of genes might have correlated just as closely, notes Nancy Cox of the University of Chicago in Illinois. Second, the researchers studied no more than 100 or so individuals with fatigue. The results, although they meet the threshold for statistical significance, are “very likely not robust,” says Sullivan. (Sullivan himself has co-authored twin studies finding a “modest” genetic component for CFS, although without pointing to a particular gene.)
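
    Sullivan's and Cox's caution is, at bottom, a multiple-testing argument, and a purely synthetic simulation makes it concrete. The sketch below is not the CDC analysis; only the counts (43 variants, roughly 100 fatigued subjects and 55 controls) come from the article, while the allele frequencies, the chi-square test, and every variable name are illustrative assumptions.

    ```python
    # Sketch (illustration only): how many of 43 completely random variants look
    # "significant" at p < 0.05 in roughly 155 people when there is no true effect?
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(0)
    n_cases, n_controls, n_snps, n_trials = 100, 55, 43, 200
    status = np.r_[np.ones(n_cases, int), np.zeros(n_controls, int)]  # 1 = fatigued, 0 = control

    false_hits = []
    for _ in range(n_trials):
        hits = 0
        for _ in range(n_snps):
            maf = rng.uniform(0.1, 0.5)                     # hypothetical allele frequency
            geno = rng.binomial(2, maf, size=status.size)   # 0/1/2 copies, no real effect
            table = np.zeros((2, 3), int)                   # status-by-genotype counts
            np.add.at(table, (status, geno), 1)
            if (table.sum(axis=0) == 0).any():              # skip tables missing a genotype class
                continue
            _, p, _, _ = chi2_contingency(table)
            hits += p < 0.05
        false_hits.append(hits)

    # Roughly 43 * 0.05, i.e., about two variants per run, pass p < 0.05 by chance,
    # which is the sense in which unreplicated hits at this scale are "not robust."
    print("mean nominally significant variants per run:", np.mean(false_hits))
    ```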

    Reeves doesn't disagree: “One of our caveats is that it is a small study,” he says. CDC researchers are now planning to repeat the study with 100 CFS patients. Vernon says her group is also validating the gene-expression results and will hold another computational exercise next month at Duke University in Durham, North Carolina, with a larger data set.

    Meanwhile, Gerberding has suggested that the same multipronged approach could be applied to seek genetic links to other complex diseases such as autism. That's already being done for many other diseases, from cancer to schizophrenia, notes Sullivan, although the studies use much larger samples and search the entire genome for disease markers. That scale may never be possible for relatively uncommon diseases such as CFS, he says. And he and other human geneticists warn that it's unclear whether any conclusions can be drawn from gene hunts carried out on such very small sample sizes.

  3. U.S. RESEARCH FUNDING

    Industry Shrinks Academic Support

    1. Yudhijit Bhattacharjee

    After 2 decades of steady increases, industrial funding for U.S. academic research is on the skids, according to a new report* from the National Science Foundation (NSF). University and industry officials say the 5% cumulative decline from 2002 to 2004—the first-ever 3-year slide for a funding source since NSF began compiling such data in the 1950s—reflects a slowing economy and shrinking company research budgets. But some fear the trend might continue even as the economy picks up unless companies and universities figure out how to share the fruits of industry-sponsored research.

    Company flight?

    Disagreements over the ownership of intellectual property may be limiting industrial support to U.S. universities as other sources increase their contributions.

    SOURCE: NSF

    University officials see collaborations with the private sector as an increasingly important revenue source. But they also want to maximize income from technologies developed on campus. Institutions have become so aggressive in protecting intellectual property arising out of industry-funded projects, some industry representatives say, that negotiating research contracts is becoming more difficult and time-consuming.

    “Even if we come in with the ideas and the money, we are expected to pay a licensing fee for the product of research that we already paid for,” says Stanley Williams, a computer scientist at Hewlett-Packard Laboratories in Palo Alto, California. “Then we get into a negotiating dance that can take 2 years, by which time the idea is no longer viable.”

    That hard-nosed attitude could hurt universities in the long run by damaging their relationship with industry, says Susan Butts, director of external technology at Dow Chemical Co. and co-founder of a national university-industry group that is working to improve partnerships between the two sectors. “Most universities receive more industry funding for research than total revenue from licensing inventions,” she says. (The ratio is 3 to 1, according to a 2004 survey by the Association of University Technology Managers.) “It doesn't make sense to jeopardize funding to try to increase licensing income.”

    Michael Pratt, director of corporate business development at Boston University, says most companies have fewer dollars available for outside research when times are tough. Susan Gaud, chair of the Industrial Research Institute, adds that “funding research doesn't fit into” the increasing emphasis on short-term corporate productivity. But like Butts, she sees protracted negotiations as a contributing factor behind the current decline and a stumbling block for the future.

    Williams says he wouldn't be surprised if the trend continues, fueled in part by a receptive audience among academics in France, Russia, and China. “We call it research by purchase order. You get on the phone, you talk to a professor, and the professor says it'll cost this much,” he says. “The research can start the next day. And we own everything that comes out of it.”

  4. QUANTUM OPTICS

    A New Way to Beat the Limits on Shrinking Transistors?

    1. Adrian Cho

    A new lithography scheme could sidestep a fundamental limit of classical optics and open the way to drawing ultraprecise patterns with simple lasers, a team of electrical engineers and physicists predicts. If it works, the scheme would allow chipmakers to continue to shrink the transistors on microchips using standard technologies.

    A fine line.

    A photoresist that absorbs two photons from one signal laser or the other would produce stripes half as wide as the diffraction limit.

    CREDIT: P. HUEY/SCIENCE

    “It seems quite cool, and it could be an advance in the field,” says Jonathan Dowling, a mathematical physicist at Louisiana State University in Baton Rouge. Still, he says, researchers have a long way to go before they can put the plan into practice.

    Chipmakers “write” the pattern of transistors and circuits on a microchip by shining laser light onto a film called a photoresist that lies atop a silicon wafer. According to classical optics, the light cannot create a pattern with details smaller than half its wavelength—the so-called diffraction limit. So to shrink transistors, chipmakers must use light of shorter wavelengths, such as ultraviolet light and soft x-rays. The problem is, ordinary lenses don't work at such short wavelengths.

    Physicists know that, in theory, they can beat the diffraction limit through quantum weirdness. They split a beam of light, send the two halves on paths of different lengths, and bring them back together at the surface of the target. The recombining beams can interfere with one another to make a pattern of bright and dark regions. The trick is to create a quantum connection called “entanglement” between a pair of photons traveling the two paths so that the duo acts like a single photon of twice the energy and half the wavelength. The interfering beams can then write details onto the photoresist that are half the size of the diffraction limit. Entangling more photons produces smaller features still.
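
    In outline, the arithmetic behind that claim is a textbook interference relation rather than anything specific to the new proposal. Two counter-propagating beams of wavelength λ (wave number k = 2π/λ) expose a classical photoresist with an intensity pattern

    \[ I_1(x) \propto 1 + \cos(2kx), \]

    whose fringe spacing is λ/2, the diffraction limit. If N photons are entangled so that the resist absorbs them as a single unit carrying N times the energy, the recorded pattern becomes

    \[ I_N(x) \propto 1 + \cos(2Nkx), \]

    with fringe spacing λ/(2N): half the classical limit for entangled pairs (N = 2) and finer still as more photons are entangled.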

    Such “quantum lithography” has yet to find its way into production lines, however, largely because it's hard to produce the entangled photons. Now, electrical engineer Philip Hemmer and physicist Suhail Zubairy of Texas A&M University in College Station and colleagues have concocted a scheme that they say can produce the same result with ordinary unentangled laser light.

    Instead of splitting a beam, the researchers propose shining two “signal” lasers of slightly different wavelengths onto a surface coated with photoresist. They would also shine two “drive” lasers onto the same spot. In their scheme, the molecules of the photoresist can absorb only a very specific amount of energy—less than twice the energy of either signal beam. If the drive beams are tuned just right, a molecule would simultaneously absorb two photons from one signal beam or the other—but not one from each—while spitting a single photon into one of the drive beams. The photoresist would effectively be patterned by beams of twice the energy and half the wavelength of each signal beam, yielding details half as small as the diffraction limit would allow, the researchers report in the 28 April Physical Review Letters.

    The challenge will be to find just the right absorbing material for the photoresist, says Dowling, one of the inventors of the entanglement approach. “I call this conservation of magic,” he says. “Either you have to have a magic state of light, or you need a magic absorber.” Yanhua Shih of the University of Maryland, Baltimore County, adds that the researchers have shown only that they can make a tight pattern of parallel lines. In principle, the new technique may not be able to make more elaborate patterns, Shih says. Others say that, in theory, any pattern can be fashioned from an appropriate superposition of stripes.

    Hemmer says he's working on an experimental realization of the scheme. Time will tell whether it's one giant half-step for technologists.

  5. IMMUNOLOGY

    Differences in Immune Cell "Brakes" May Explain Chimp-Human Split on AIDS

    1. Jon Cohen

    A new study that compares the immune responses of chimps and humans offers yet more compelling evidence that subtle differences in gene activity can result in big distinctions between the two species. The researchers, led by hematologist Ajit Varki of the University of California, San Diego (UCSD), suggest that their findings may explain why chimps and other great apes do not typically develop AIDS when infected with HIV, cirrhosis after infection with hepatitis B or C viruses, or any of several other diseases common in humans. Pathologist Kurt Benirschke, who also is at UCSD but did not participate in this study, calls the new work “terrific” and “really great science.”

    Disease dodger?

    Higher levels of Siglecs expressed by ape T cells may explain why they do not suffer many common ailments that plague humans.

    CREDIT: ADAPTED FROM D. H. NGUYEN ET AL., PNAS

    As they report in the 1 May Proceedings of the National Academy of Sciences, Varki and co-workers studied proteins called Siglecs that his lab co-discovered in the 1990s. Many immune cells express Siglecs (which stands for sialic acid-recognizing Ig-superfamily lectins), and some of them appear to calm the immune response by preventing a process of immune cell expansion known as activation. Humans and apes share the same Siglec genes, but Varki's group explored whether they were turned on to the same degree in T lymphocytes taken from humans, chimps, gorillas, and bonobos.

    Using monoclonal antibodies to various Siglecs, the researchers found that although the T cells of people from many different geographic and ethnic backgrounds sported low levels of the Siglecs or none at all, the ape T cells produced clearly detectable amounts of the molecules. When they genetically engineered the human cells to express high levels of one key Siglec, they found that, as predicted, T cell activation was inhibited. Conversely, they cranked up activation of chimp cells by using an antibody to block that same Siglec on them. “I think what's happening is that Siglecs are providing a brake in ape T cells,” says Varki. “Human T cells seem to have lost these brakes.” Varki and his co-authors speculate that early humans faced novel pathogens as they migrated into new areas, which may have created pressure for hyperactivated T cells to evolve.

    Varki's team notes that AIDS, chronic hepatitis B and C, rheumatoid arthritis, bronchial asthma, and type 1 diabetes are all T cell-mediated diseases that are linked to overactivation of the immune cells—and none appear to afflict apes. Varki says it may ultimately be possible to develop a therapy that turns up expression of Siglecs in humans with these diseases, dampening activation and preventing symptoms. But he emphasizes that this study only hints at that possibility. “At the moment it's all in vitro,” stresses Varki. “But it is all internally consistent with what we know about the biology.”

    The new insights on Siglecs may also help avoid tragedies like the one that recently occurred in a U.K. drug trial (Science, 24 March, p. 1688). The study involved a monoclonal antibody that stimulates T cell activation and proved safe in monkeys. But when given at much lower doses to six humans, it quickly caused serious illness. “When it comes to the immune system, be careful about predicting whether a primate model will predict human responses, especially for T cells,” cautions Varki, noting that an in vitro comparison of rhesus and human cells similar to his study might have revealed the stark differences between the species.

    AIDS researchers are taking note of the new work as well. In AIDS, T cell activation appears to play a central role in the destruction of the immune system as it signals the expansion of lymphocytes bearing CD4 receptors, the key cells that HIV targets and destroys. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, points out that such activation in people creates more targets for HIV. “If you didn't have that robust activation, you wouldn't provide a fertile environment for HIV to replicate,” he says. “Varki makes an interesting story.”

    Immunologist Gene Shearer of the National Cancer Institute in Bethesda, Maryland, who studies how HIV causes immune destruction by activating CD4 cells that then commit suicide, agrees that the lower Siglec expression in human T cells could be an important new factor to study in AIDS. Varki's results have already prompted him to “do some rethinking.”

  6. NEUROBIOLOGY

    Despite Mutated Gene, Mouse Circadian Clock Keeps on Ticking

    1. Greg Miller

    If you opened up a clock and took out the most important-looking cog in its works, how well do you suppose the clock would work? A team of neurobiologists recently tried an analogous experiment with the biological clocks of mice, knocking out a gene thought to encode a crucial part of the molecular machinery that generates circadian rhythms. To their great surprise, the clock didn't stop.

    Running on time.

    CLOCK-deficient mice maintain a daily schedule of wheel running (black) even in constant darkness (gray areas).

    CREDIT: J. P. DEBRUYNE ET AL., NEURON 50, 465–477 (4 MAY 2006)

    The mutant mice had nearly normal daily cycles of activity, even in total darkness, and the activity levels of other clock genes continued to wax and wane. “It was pretty heretical,” says Steven Reppert, who reports the findings in the 4 May issue of Neuron along with David Weaver and Jason DeBruyne of the University of Massachusetts Medical School in Worcester and other colleagues. “On first reading, this is a striking finding,” says Steve Kay, a clock researcher at the Scripps Research Institute in San Diego, California. Still, Kay and others aren't quite ready to toss out everything they know about molecular clocks. “There's a distinct possibility that this result has a simple explanation,” Kay says.

    The current model of the mammalian circadian clock has at its heart two transcription factors, CLOCK and BMAL1, that drive a 24-hour cycle of gene expression. The details are complicated, but the general idea is that CLOCK and BMAL1, bound together, kick off each cycle by spurring transcription of several other genes, whose protein products accumulate and interact in the ensuing hours, ultimately switching off CLOCK-BMAL1 to bring the cycle to an end. Researchers already knew that knocking out the Bmal1 gene abolishes circadian rhythms, and they have long assumed that knocking out Clock would do the same.
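
    As a cartoon of that logic only, a three-variable delayed-negative-feedback loop of the classic Goodwin type can be simulated in a few lines. Nothing below comes from the Neuron paper: the variable names, parameters, and the deliberately steep Hill exponent (chosen simply so the toy loop oscillates) are all illustrative assumptions, and time is in arbitrary units rather than hours.

    ```python
    # Toy Goodwin-style loop: transcription of a gene (x) is shut off by its own
    # downstream product (z), acting through an intermediate (y); the delay around
    # the loop yields sustained cycling.  Illustration only, not the mammalian clock.
    import numpy as np
    from scipy.integrate import solve_ivp

    n = 20              # Hill exponent of the repression step (unrealistically steep on purpose)
    k = 0.5 ** (1 / 3)  # identical first-order decay rate for all three species

    def goodwin(t, state):
        x, y, z = state
        dx = 1.0 / (1.0 + z**n) - k * x   # transcription, repressed by z
        dy = x - k * y                    # translation
        dz = y - k * z                    # maturation into the active repressor
        return [dx, dy, dz]

    sol = solve_ivp(goodwin, (0, 200), [0.1, 0.1, 0.1], max_step=0.05)

    # Crude period estimate: average spacing between successive peaks of x
    # after the initial transient has died away.
    x, t = sol.y[0], sol.t
    peaks = [i for i in range(1, len(x) - 1)
             if x[i - 1] < x[i] > x[i + 1] and t[i] > 50]
    print("approximate period (arbitrary units):", np.diff(t[peaks]).mean().round(2))
    ```

    This matters for the knockout result described below: if another factor can occupy CLOCK's slot in the loop, the feedback cycle, and with it the rhythm, can continue.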

    Joseph Takahashi and colleagues at Northwestern University in Evanston, Illinois, first identified Clock in 1994 and reported that mice with a mutated version of the gene that produces an altered CLOCK protein have extraordinarily long circadian cycles, up to 28 hours (Science, 29 April 1994, p. 719). Such mice also have disrupted molecular rhythms in the suprachiasmatic nucleus, the site of the master body clock in the brain. But creating mice with a completely disabled Clock gene has proven tricky, Takahashi says: “My lab has been working on it all this time.”

    Now Reppert's team has succeeded. “To our surprise and shock, the animals ended up having almost no change in circadian behavior,” he says. The standard way to test a mouse's circadian clock is to put it in a cage with a running wheel and repeatedly turn the lights on for 12 hours, then off for 12 hours. The nocturnal animals normally spend far more time on the wheel during the dark periods, and thanks to the internal clock, they maintain this activity pattern even when the lights are off for weeks at a time. CLOCK-deficient mice did the same, for up to 6 weeks of solid darkness, Reppert and colleagues found.

    Further investigations revealed that the cyclical activity of other clock-related genes continued in the brains and livers of the CLOCK-deficient mice, although with some abnormalities. “In general, it looked like the molecular clock still moved forward in the brain,” Reppert says. The circadian clock in the liver appeared to be more substantially altered.

    “The work is very good, and the result is somewhat unexpected,” says Takahashi. However, he adds, “I don't think it says CLOCK is not playing an important role normally.” In Takahashi's view, the new work simply shows that when CLOCK isn't there, something else can take its place. The reason for the severe disruption in the original Clock mutants may be that the altered CLOCK protein kept this substitute from interacting with BMAL1, he says.

    Reppert and colleagues suspect that the substitute is a related transcription factor called NPAS2; they report in Neuron that it binds with BMAL1 in the brains of CLOCK-deficient mice. Now the researchers are working to create mice missing both CLOCK and NPAS2. That's a crucial experiment, says Kay: “If the mice are rhythmic, then our field will need a paradigm shift.”

  7. ENERGY RESEARCH

    Industry Conservation Programs Face White House Cuts

    1. Eli Kintisch

    Mechanical engineer Christoph Beckermann has developed software to help the U.S. metal-casting industry reduce waste and save energy by modeling how cracks form as the metal cools. At a time when politicians are demanding that the country become more energy efficient, the research by his 10-person team at the University of Iowa, Iowa City, seems like a sure-fire winner.

    Think again. Last week, representatives of energy-intensive industries from steel to chemicals came to Washington, D.C., to lobby against a 15% cut the Bush Administration has proposed for Industries of the Future (IOF), the program that funds Beckermann and other researchers on projects ranging from papermaking studies to combustion research. The cut is part of an $87 million bite the White House wants to take out of the $606 million program for efficiency research and technology at the Department of Energy (DOE), the latest twist in what supporters call a “death spiral” for a program that was once much larger.

    Either ore.

    The Department of Energy says it needs to cut industrial efficiency studies of basic industries such as steel casting to fund more promising research.

    CREDIT: PHOTOS.COM

    A report last year by the National Academies' National Research Council found “significant cumulative energy and cost savings” in the seven energy-intensive industries covered by IOF, which together consume three-quarters of the energy used by U.S. industry. And last week, a report from the American Council for an Energy-Efficient Economy described improvements developed by efficiency researchers such as Beckermann, who benefit from matching funds from industry on each grant, as “low-hanging fruit.” But DOE officials argue that private companies should pick up the tab for that harvest. “With high energy prices, there's incentive for industry to take on some of these programs,” says Jacques Beaudry-Losique, DOE's industrial technology manager.

    Toni Grobstein Marechaux, a former director of the academies' manufacturing and engineering design board, says the IOF program conducts research that industry wouldn't do on its own. Corporate leaders are reluctant to wait for the payoff from most efficiency research, she says, even with energy prices rising. The government also helps companies avoid potential antitrust issues when they work collaboratively with competitors. In addition, the program subsidizes work that some heavy industries simply lack the funds to carry out. Most metal-casting firms are small and employ few if any engineers, says Beckermann, citing as proof the crude modeling software that now exists.

    IOF's supporters also worry about the next generation of energy-focused industrial engineers in the wake of the Administration's proposed 35% cut in the related $6.4 million Industrial Assessment Centers program that allows undergraduates to be energy-saving consultants to manufacturing plants. Patrick Johnson, a manager at the glass giant Corning, appreciates the value of the energy audits that the students perform. “Sometimes Corning employees have blinders on,” he says.

    DOE officials say the cuts are a necessary consequence of limited resources. Those budget pressures place a higher priority on new, long-term research into fuel sources such as cellulosic ethanol or nuclear power (Science, 10 March, p. 1369). On the other end of the spectrum, they note that 72 teams have completed assessments of energy-intensive manufacturing sites under the department's current “Save Energy Now” campaign. Despite the proposed cuts, they add, the IOF program will still support work with industry in fields including nanomaterials and catalysis.

    Congressional staffers say that DOE's proposed cuts are penny-wise and pound-foolish. “Congress will have to restore funding to some of these accounts,” says a House appropriations staffer. Beckermann hopes they do, calling the program “a model for industry-government collaboration.”

  8. CLIMATOLOGY

    A Tempestuous Birth for Hurricane Climatology

    1. Richard A. Kerr

    Launching a new field of science is always tricky, but starting up the study of hurricane behavior over the decades—in the wake of Katrina—has proved challenging indeed

    Rising threat?

    The number of Atlantic hurricanes (left) jumped in 1995, but globally the number of tropical cyclones is steady. The debate is over whether storms have been getting stronger as the world warms.

    CREDIT: ROBERT MUNTEFERING/GETTY IMAGES

    MONTEREY, CALIFORNIA—It's not every day one can witness the inception of a new field. Researchers attending the 27th Conference on Hurricanes and Tropical Meteorology late last month* got their usual diet of potential vorticity analyses and Madden-Julian oscillations. But they also debated a newborn science created to assess whether tropical cyclones—variously called hurricanes, typhoons, or cyclones—have strengthened under global warming.

    Until Science and Nature published two papers last year contending that tropical cyclones around the world had strengthened, few scientists paid much attention to long-term variations in such storms. In those papers, a couple of academic meteorologists took the long view of tropical cyclone records compiled day-by-day by weather forecasters. From those weather records, the meteorologists extracted storm climatologies, statistical histories of storm behavior that could be searched for any change over the decades. Unexpectedly, they found a surge in tropical cyclone intensity—and if it continued under global warming, they concluded, it would noticeably amplify the destruction in coming decades. In the wake of Hurricane Katrina, those analyses were enough to spark a highly public and sometimes raucous new field: hurricane climatology.

    Blown away.

    Tropical storm power and destruction go up with the cube of wind speed.

    CREDIT: ANNIE GRIFFITHS BELT/NATIONAL GEOGRAPHIC/GETTY MAGES

    For some, a turnabout

    In theory, tropical cyclones aren't supposed to be noticeably stronger after a few decades of warming, which explains why no one had been searching for a trend in the ups and downs of the storm record. Then, at an October 2004 press conference, meteorologist Kevin Trenberth of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, reacted to claims that the particularly active 2004 season in the Atlantic Ocean was just part of a natural cycle that pumps up Atlantic storms every few decades. He and the others on the panel “didn't think that was right,” says Trenberth. “We thought global warming was playing a role.” The burst of hurricane activity since 1995 just seemed too strong to be entirely natural.

    Others thought Trenberth was the one getting it wrong. “We were rather skeptical” of Trenberth's remarks, says Peter Webster, who specializes in monsoons and other tropical phenomena. “We thought Kevin was sounding his trumpet a little bit.” So Webster, a meteorologist at the Georgia Institute of Technology in Atlanta, and his colleagues looked at records of maximum wind speed of storms around the world as gauged from satellite images.

    In the 16 September 2005 issue of Science (p. 1844), less than 3 weeks after Hurricane Katrina ravaged New Orleans, Webster and colleagues reported that in fact the abundance of tropical cyclones had not increased between 1970 and 2004. But the number of the strongest storms—those in categories 4 and 5—had jumped 57% from the first half of the period to the second. That reinforced findings by meteorologist and hurricane specialist Kerry Emanuel of the Massachusetts Institute of Technology in Cambridge. He had reported in the 4 August 2005 issue of Nature that the total power released during the lives of Atlantic and western North Pacific storms had risen between 40% and 50% from the first half of a 45-year record to the last half.
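
    Emanuel's yardstick, his “power dissipation index,” integrates the cube of each storm's maximum wind speed over the storm's lifetime and sums over storms; in simplified form,

    \[ \mathrm{PDI} = \sum_{\text{storms}} \int_{0}^{\tau} v_{\max}^{3}(t)\, dt . \]

    Because wind speed enters as a cube (the relation behind the caption above, “power and destruction go up with the cube of wind speed”), the reported 40% to 50% rise in total power would, if storm counts and lifetimes were held fixed, correspond to only a 12% to 14% rise in peak winds, since 1.12³ ≈ 1.40 and 1.14³ ≈ 1.49.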

    Both Webster and Emanuel were taken aback by their own findings. “I changed my mind in a big way” about how much the warming could be intensifying storms, says Emanuel. But it wasn't just because of the apparent upward trend of storm intensity. When Emanuel looked at how storm power and ocean temperature had varied, “what I found startled me,” he told the conference. In the area just north of the equator in the Atlantic Ocean, where most hurricanes get their start, the power released during the lifetimes of storms is “spectacularly well correlated with sea surface temperature,” says Emanuel. Hurricane intensity had risen along with temperature over the past half-century, even matching ups and downs along the way.

    The region where Atlantic hurricanes develop has in turn warmed in step with the Northern Hemisphere for the past half-century, Emanuel noted. And that warming is widely held to be driven at least in part by rising greenhouse gases. Two studies may not be enough to prove that Trenberth is right about greenhouse warming driving storm activity, but both Emanuel and Webster now believe they see a strengthening of tropical cyclones suspiciously in synchrony with global warming.

    Stormier models

    Newly minted hurricane climatologists at the conference got some support from climate modelers. Kazuyoshi Oouchi of the Advanced Earth Science and Technology Organization in Yokohama, Japan, and his colleagues presented results from highly detailed climate simulations run on Japan's Earth Simulator, the world's most powerful supercomputer devoted to earth sciences. Global climate models typically calculate climate at points 200 kilometers or more apart. The resulting pictures of climate are too fuzzy to pick up anything as small as a tropical cyclone. But the Japanese group simulated the present climate and the greenhouse-warmed climate near the end of the century at a resolution of just 20 kilometers, thanks to the Earth Simulator's power. That was detailed enough for tropical cyclones to appear in the model, allowing the researchers to roughly gauge their intensities. In the warmer world, the total number of storms over the globe had actually decreased by 30%. But the number of the rarer category 3 and 4 storms had increased substantially, not unlike Webster's observational results.

    Additional support for intensification also came from a new, independent analysis of wind data, mentioned in passing at the conference. Climate researchers Ryan Sriver and Matthew Huber of Purdue University in West Lafayette, Indiana, will soon report in Geophysical Research Letters on how they measured the power released by storms by using a compilation of the world's weather data developed at the European Centre for Medium-Range Weather Forecasts in Reading, U.K. They found a 25% increase in storm power between the first half of the 45-year record and the second, consistent with Emanuel's analyses.

    Cause and effect?

    As late-summer water temperatures rose in the tropical Atlantic, where hurricanes get started, the power released by Atlantic hurricanes rose too.

    CREDITS: (SOURCE) K. EMANUEL/MIT; (RIGHT) ALAN SCHEIN PHOTOGRAPHY/CORBIS; (BACKGROUND) NASA

    Even stormier objections

    Intensifying tropical cyclones possibly driven by the greenhouse might sound like one more chapter in the familiar global warming story—melting glaciers, rising seas, searing heat waves—but some read it differently. The U.S. National Oceanic and Atmospheric Administration (NOAA) stated last year in press releases that “longer-term climate change appears to be a minor factor” in “the most devastating hurricane season the country has experienced in modern times.” The surge in Atlantic hurricane activity since 1995 is the latest upswing in a natural cycle, the releases said. As the Atlantic Ocean warms and wind patterns shift, hurricanes increase for a decade or two until a lull sets in again.

    NOAA's seemingly official pronouncements sounded moderate next to that of William Gray of Colorado State University (CSU) in Fort Collins, an elder statesman of the hurricane community. Well known for his forecasts of the coming hurricane season, Gray pushes natural cycles with a vengeance. “There's definitely a big multidecadal cycle going on,” he said in his conference talk, not just in the Atlantic Ocean but around the world. “This isn't global-warming induced; this is natural. We may go for a few years, and then it's going to cool.” And with the cooling, hurricanes will calm down again, he said, just as they have before.

    The warming of recent decades and the jump in hurricanes are being driven by a surging ocean current that brings warm water northward into the Atlantic Ocean, Gray insisted. Climate models in which mounting greenhouse gases drive global warming have simply got it wrong, he said; greenhouse gases are feeble agents of warming. “I'm a great believer in computer models,” he said. The audience laughed skeptically. No, he assured them, “I am—out to 10 or 12 days. But when you get to the climate scale, you get into a can of worms. Any climate person who believes in a model should have their head examined. They all stink.”

    “This was highly entertaining,” observed meteorologist Gregory Holland of NCAR from the audience, “but unfortunately you obfuscate the real issue.” Holland, a graduate student of Gray's at CSU in the early 1980s, is second author on Webster's Science paper. He went on to staunchly defend climate models. “Right now, they're providing a very good, rational picture of the situation,” he concluded. Supportive applause rose from the audience.

    Later, during a panel discussion, Emanuel questioned even the existence of a natural cycle of Atlantic warming and cooling, at least one that influences the development of hurricanes. He believes a misstep in a classic analysis—a 2001 Science paper on which Gray was an author—tended to create a cycle where none exists. By subtracting a linear trend from a record of rising temperature that actually contained the nonlinear, upward-curved trend of global warming, the analysis created an oscillation, he said. In the end, Emanuel finds “no evidence for natural cycles in late summer tropical Atlantic sea surface on these time scales,” which for him leaves global warming as the leading candidate for a driver.
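
    Emanuel's detrending argument is easy to reproduce with made-up numbers. In the sketch below, everything is synthetic and illustrative: a smoothly accelerating “temperature” series plus noise is detrended with a straight line, and the residual comes out high in the early decades, low at mid-century, and high again at the end, which reads as a multidecadal oscillation even though none was put in.

    ```python
    # Illustration only: removing a straight line from an accelerating (convex)
    # warming series manufactures an apparent slow oscillation in the residuals.
    import numpy as np

    years = np.arange(1900, 2006)
    frac = (years - years[0]) / (years[-1] - years[0])          # 0..1 across the record
    warming = 0.8 * frac**2                                     # made-up accelerating trend (deg C)
    rng = np.random.default_rng(1)
    record = warming + rng.normal(0.0, 0.05, size=years.size)   # add some "weather" noise

    slope, intercept = np.polyfit(years, record, 1)             # best-fit straight line
    residual = record - (slope * years + intercept)

    # The residual of a convex curve about its least-squares line is positive at
    # both ends and negative in the middle: warm-looking early decades, a
    # cool-looking mid-century, warmth again at the end.
    for start, end in [(1900, 1930), (1945, 1975), (1985, 2005)]:
        sel = (years >= start) & (years <= end)
        print(start, end, round(residual[sel].mean(), 3))
    ```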

    Storm central.

    Planes were able to penetrate Katrina's eye wall (left) and drop probes through the storm (lower left), greatly improving storm-intensity measurements.

    CREDITS (TOP TO BOTTOM): LCDR SILAH/NOAA AIRCRAFT OPERATIONS CENTER (AOC); NOAA AOC

    A haphazard record

    Emanuel's line of argument caught critics by surprise, and his challenge to a purely natural driver for hurricane activity went largely unanswered at the conference. NOAA scientists such as meteorologist Christopher Landsea of the National Hurricane Center in Miami, Florida—another former Gray student—claimed that NOAA public affairs staff members writing the press releases had overstated the case for a natural cycle. And the warming may well be largely human-induced, said Landsea. The question, he argued, is not what's causing tropical warming, but how much of an impact does that warming have on hurricane intensity? Not much, he suspects. According to theory and computer modeling, by now the intensification should be only a sixth of what Webster and Emanuel have reported, he noted. Rather than a real strengthening, Landsea said, Emanuel and Webster may well be seeing a fictitious one created by a deeply flawed hurricane record.

    Emanuel disagrees. “They tend to count [the anomalous strengthening] against the observations,” he says. “I count it against the theory, although I helped develop the theory.”

    That moved the debate to the observational record of tropical cyclone intensity, where attendees found considerable grounds for agreement: The record is far, far from perfect. Hurricane climate researchers have the same problem as climate researchers had when they began searching for signs of global warming. Weather forecasters created the surface temperature record as they went about their business predicting the next day's weather. They never planned to string their twice-daily measurements into a century-long record, so they had no qualms about moving their thermometers from place to place, and they paid no mind when heat-retaining cities grew up around them.

    Likewise, forecasters' observations of tropical cyclones “weren't designed to be climate records,” notes Landsea. And to make matters even worse for hurricane climatologists, hurricane forecasters almost never directly measure storm intensity—the maximum wind speed 10 meters above the surface averaged over 1 minute. Instead, they usually gauge maximum wind speed indirectly.

    Indirect measurements of storm intensity haven't always been done well or frequently, either. By the time Hurricane Carol hit in 1954, Landsea noted, forecasters were flying into storms—if they weren't too strong—and judging wind speed by looking down at waves on the ocean. Sometimes they would estimate maximum winds by making their best guess of how winds would change from their aircraft's altitude to the near-surface. Such sampling wouldn't have caught Hurricane Wilma's 1-day leap last fall from minimal hurricane to category 5, he said. That took eight types of measurements made 280 times over 12 days, and some of those measurements required air-dropped instrumentation and satellites. Even today, says Landsea, “there are big discrepancies about how strong a storm was.” The U.S. and Japanese typhoon warning centers for the Pacific Ocean—where no aircraft reconnaissance is done now—at times differ in their estimates by as much as two categories.

    At the conference, a half-dozen speakers documented the sad state of tropical cyclone intensity measurements. Two groups—led by John Knaff of CSU and by Bruce Harper of Systems Engineering Australia Proprietary Limited in Brisbane—attempted to correct intensity records from parts of the Pacific Ocean for now-obvious errors. Both reanalyses reduced the upward trend of storm intensity. Knaff's work, in particular, suggests that Emanuel's and Webster's studies “may have been premature,” says Landsea. “The database wasn't up to it.”

    On the other hand, the reanalyses did not eliminate the trend. “The data's not very good,” agreed Webster. “However, to say it's all artificial is an exaggeration. We would have had to have misidentified 160 to 180 category 4's and 5's.” He doubts they were off by that much.

    The discussions at the conference, although informative, did not change many minds. “There are persuasive arguments on both sides,” says Hugh Willoughby of Florida International University in Miami, a former director of NOAA's Hurricane Research Division. “Honestly, we don't know” who's right, he says. “That's the real story.” Tropical cyclones probably intensify under warmer climates, he says, but most likely not as much as Webster and Emanuel believe. “We don't know where we are in the middle.” That's what the new field of hurricane climatology hopes to find out.

    • * 27th Conference on Hurricanes and Tropical Meteorology, 24–28 April, Monterey, California (sponsored by the American Meteorological Society).

  9. NUCLEAR PHYSICS

    Indian Angst Over Atomic Pact

    1. Pallava Bagla

    Facing new restrictions on where and with whom they work, some Indian nuclear scientists assert that a historic agreement will narrow their research horizons

    MUMBAI—Over the years, physicists at two scientific powerhouses here—the Tata Institute of Fundamental Research (TIFR) and the Bhabha Atomic Research Centre (BARC)—have enjoyed a tight-knit collaboration in superconductivity research, producing two dozen papers and a handful of patents. But their days of working side by side may be numbered. Under a controversial nuclear deal with the United States, India has agreed to separate its vast nuclear establishment into civilian and military programs—and BARC is on the military list. If the pact is finalized, BARC scientists may no longer be allowed to work with their civilian counterparts. If TIFR were forced to sever all linkages with BARC, says TIFR director Sabyasachi “Shobo” Bhattacharya, a condensed matter physicist, “it would be a real tragedy, but the worst-case scenario may not unfold.” Others are not so optimistic.

    In March, India and the United States unveiled a landmark agreement that would end India's status as a pariah for having snubbed the Nuclear Nonproliferation Treaty and acquired a nuclear arsenal. Under the agreement, India would be allowed to import civilian nuclear technology in exchange for submitting key facilities to international inspections. Even before the ink was dry, however, some U.S. nonproliferation analysts assailed the pact as a bad deal that would not make the world safer. Indian scientists and officials, meanwhile, hailed it as a triumph (Science, 10 March, p. 1356).

    Outside the wall.

    Nine institutes may have to sever ties to India's military nuclear establishment.

    Now many Indian nuclear scientists are having second thoughts. They fear that a key provision—cordoning off military facilities—will end scores of collaborations and maroon thousands of physicists in military labs. Civilian labs would suffer as well, they claim, under the scrutiny and paperwork demands of inspectors. Some fear that their research environment, already hampered by U.S. sanctions and prohibitions on nuclear imports, will deteriorate further. “The India-United States deal will destroy nuclear research in India,” fumes Padmanabha Krishnagopala Iyengar, a nuclear physicist and former chair of the Indian Atomic Energy Commission. “How can one subject any scientific creativity to safeguards and inspections?”

    These concerns could undermine negotiations aimed at tweaking the agreement to make it more palatable to the U.S. Congress, which must amend laws for the deal to go through. A Senate hearing on 26 April challenged the deal; Democrat Joseph Biden of Delaware demanded “a full list of India's civilian facilities.” Already, India has spurned a U.S. request to forswear nuclear tests. Indian negotiators, contacted by Science, say they will raise the possibility of continued scientific collaborations between BARC and TIFR—possibly in negotiations with the International Atomic Energy Agency (IAEA) in Vienna, Austria, and during upcoming talks with the United States on a special Indian inspection regime.

    Inspections “are extremely intrusive, immensely disruptive, and are often conducted in an atmosphere vitiated by suspicion,” contends Iyengar, who served on IAEA's board of governors. India has listed nine of the Department of Atomic Energy's two dozen or so research facilities as civilian (see table, below). “Any or all research [at these labs] may come under scrutiny,” Iyengar says. IAEA declined to comment on the ongoing negotiations.

    Nuclear researchers also object to U.S. demands that India erect firewalls to prevent skilled scientists from serving in both civilian and military labs. Annaswamy Narayana Prasad, a mechanical engineer and former BARC director, says that “effective separation of the two components is a near impossibility.” The proposal would require the Department of Atomic Energy to label each of its 65,000 staff members as civilian or military personnel, possibly freeze movement between the sectors by 2014, and perhaps end collaborations. Says Avinash Khare, a theoretical physicist at the Institute of Physics in Bhubaneswar, this “would be a real calamity.”

    India also may have to speed up work on new training and research facilities. Already, India has agreed to decommission two of three research reactors at BARC by 2010. The remaining reactor, the 100-megawatt Dhruva, would be overwhelmed by current users.

    The bottleneck could be solved if a new reactor is built on a civilian site—but Indian officials say they may choose to build it at BARC, a military center. According to Anil Kakodkar, a mechanical engineer and chair of the Atomic Energy Commission, the government has not yet decided whether to site the proposed $100 million, 30-megawatt multipurpose reactor on BARC's main campus or at a satellite facility in Vizag, which might enable classification as a civilian reactor. It could come online as early as 2014. Srikumar Banerjee, a materials scientist and director of BARC, says plans are under way to ensure that a small research reactor at BARC, Apsara, is not shut down but refurbished using Indian nuclear fuel.

    With the nuclear deal under fire in both India and the United States, its fate for now seems uncertain. To a growing number of Indian scientists, that would not be the end of the world.

  10. NEUROSCIENCE

    The Map in the Brain: Grid Cells May Help Us Navigate

    1. Karen Heyman*
    * Karen Heyman is a writer in Santa Monica, California.

    A newfound class of neurons enables the brain to perform complex spatial navigation—and may even help form memories

    On the grid.

    As rodents explore an environment, neurons called grid cells fire in a regular geometric pattern such as this one.

    CREDIT: BJARNE RØSJØ/FAKTOTUM INFORMASJON, NORWAY

    If you're in an unfamiliar city and trying to locate the convention center for yet another conference, a map prepared by the local Visitor's Bureau can come in handy. It's likely to incorporate a navigational aid: an overlaid square grid, often with rows labeled with letters of the alphabet and columns labeled with numbers. An index might tell you that your hotel is in the A2 square, and the convention center can be found in Q22. Last November, scientists who successfully reached the new Washington, D.C., convention center heard Norwegian neuroscientist Edvard Moser address the Society for Neuroscience's annual meeting. In an invited talk, he told the audience that rodents, and presumably people, have their own versions of such navigational grids embedded in their brains. “I swear my jaw just dropped,” recalls computational neuroscientist David Redish of the University of Minnesota, Minneapolis. “It's amazing to see the interaction between theory and experiment that has come together in the Mosers' work.” James Knierim, a neuroscientist at the University of Texas Medical School in Houston, was equally impressed by the talk, declaring, “Moser's work is the most important discovery in our field in over 20 years.”

    The jaw-dropping discovery, announced 3 months earlier in a paper in Nature, centers on a brain region called the entorhinal cortex. Moser and his colleagues, including his wife May-Britt Moser and postdoctoral researchers Torkel Hafting and Marianne Fyhn, let rats roam freely in large enclosures while recording the firing of individual neurons from this cortical region. The neurons fired at distinct places: If plotted on a map of the enclosure, the firing locations of each neuron formed a triangular grid. “It's a lattice that's repeated over and over again,” says Edvard Moser. Consequently, he and his team at the Norwegian University of Science and Technology in Trondheim have dubbed these previously unknown neurons “grid cells.”
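
    The geometry the Trondheim group describes is commonly idealized in the modeling literature as the sum of three cosine gratings oriented 60 degrees apart, which peaks on the vertices of equilateral triangles. The sketch below draws only that idealization; it is not the Mosers' analysis, and the grid spacing, orientation, and offset are arbitrary choices.

    ```python
    # Idealized grid-cell rate map: three cosine gratings 60 degrees apart sum to
    # a firing pattern whose peaks tile the plane as a triangular lattice.
    # A common cartoon from the modeling literature, not recorded data.
    import numpy as np

    spacing = 0.5                              # grid spacing in meters (arbitrary)
    offset = np.array([0.1, 0.2])              # arbitrary spatial phase of the grid
    k = 4 * np.pi / (np.sqrt(3) * spacing)     # wave number that yields that spacing

    angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])           # 0, 60, 120 degrees
    kvecs = k * np.column_stack([np.cos(angles), np.sin(angles)])

    def rate(pos):
        """Normalized firing rate in [0, 1] at 2-D position(s) pos, shape (..., 2)."""
        g = np.cos((pos - offset) @ kvecs.T).sum(axis=-1)        # sum of the three gratings
        return (g + 1.5) / 4.5                                   # rescale [-1.5, 3] to [0, 1]

    # Evaluate over a 1 m x 1 m "enclosure" and report where the cell fires hardest.
    xs = np.linspace(0.0, 1.0, 200)
    X, Y = np.meshgrid(xs, xs)
    R = rate(np.stack([X, Y], axis=-1))
    print("peak rate", float(R.max()), "at grid index", np.unravel_index(R.argmax(), R.shape))
    ```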

    Fired up.

    Edvard and May-Britt Moser analyze the neuronal activity of brain regions involved in spatial navigation.

    CREDIT: F. SARGOLINI ET AL., SCIENCE

    Notably, the grid is self-generated. The cells fire almost as if there were invisible, overlapping grids painted on the enclosure's floor, with each individual neuron laying out its own “virtual” grid—and the scale of the grids can vary in size from neuron to neuron. “Somehow the system, in this totally abstract way, is saying it's time for this cell to fire,” explains Patricia Sharp of Bowling Green State University in Ohio. “You just can't explain it through external input; … there's no such pattern in the animal's world.” Indeed, the rat doesn't need visual cues once it has been in an enclosure; a grid cell still fires in the same pattern if the animal roams in the dark.

    Even Moser and his team doubted their results at first. “We didn't really believe it,” he says. “We had to do some additional analysis to make sure this was biological.” On page 758, the Mosers and their colleagues add to the unfolding story, describing neurons in the entorhinal cortex that encode not only where a rat is, but also how fast it is moving and in what direction. Theoreticians believe that grid cells and their connections to other neurons may finally clarify ideas about how the brain performs spatial navigation. “We're over the moon about this discovery,” says John O'Keefe of University College London (UCL).

    Everything in its place

    In the 1970s, O'Keefe and his UCL colleagues discovered “place cells” in the hippocampus of the rat. On the simplest level, these neurons fire in response to where an animal is in space. Similar to the Mosers' new work, place cell experiments often feature a rat, with multiple electrodes implanted in its hippocampus, freely moving within an experimental environment; a given place cell only fires when the rat walks in a particular area, dubbed the cell's “place field.” Transfer the rodent to a different environment, and some of the same place cells are used to create a new map of the surroundings, a process called “hippocampal remapping.” This allows place cells to store representations of many different environments.

    After O'Keefe's initial discovery, he and colleague Lynn Nadel felt that place cells must form the basis of a “cognitive map” in the hippocampus, but they realized it would take more than just place cells for an animal to navigate its world. Says O'Keefe, “We predicted right from the beginning that there would have to be information about directions and distances to tie together the place cells into something like a map formation.”

    In the 1980s, James Ranck of the State University of New York Downstate Medical Center in Brooklyn identified another key group of navigational neurons: head direction cells, which have connections projecting into the entorhinal cortex. Depending on which way the animal is pointing its head, different groups of these cells fire, letting the animal know which way it faces.

    Still, the brain needs more to navigate through a complex world. “You have information about place cells—it tells you where you are located—and you have information from head direction cells that tells you what direction you're facing, but how do you then use all that information to update over time your path through an environment?” asks Jeffrey Taube, who was a postdoc with Ranck and continues to study head direction cells at Dartmouth College.

    That process of real-time updating is known as “dead reckoning” or “path integration,” and it is now thought that grid cells might be the key to how it works. These neurons' discovery came about through follow-up experiments on place cells. As scientists probed more deeply into the hippocampus, they found place cells with larger and larger place fields. But it's hard to gauge the extent of a large place field, because one can't track neuronal activity in rats running in the wild. In the standard, small experimental enclosure, “the cells either don't fire, because you're not in the place field, or they fire everywhere, because the place field is huge,” explains the Mosers' collaborator Bruce McNaughton of the University of Arizona, Tucson.
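
    Path integration itself is simple bookkeeping: accumulate small displacements computed from speed and heading to keep a running estimate of position. The sketch below is a minimal illustration of that bookkeeping, not anything taken from the papers discussed here; the speeds, headings, and durations are made up.

```python
import math

def path_integrate(start_xy, steps):
    """Dead reckoning: update an estimated position using only self-motion
    signals (speed and heading), with no external landmarks. `steps` is a
    sequence of (speed, heading_in_radians, duration) tuples, illustrative
    stand-ins for the velocity and head-direction signals discussed here."""
    x, y = start_xy
    for speed, heading, dt in steps:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

# Example: walk east for 5 seconds, then north for 5 seconds, at 0.2 m/s.
print(path_integrate((0.0, 0.0), [(0.2, 0.0, 5.0), (0.2, math.pi / 2, 5.0)]))
# -> approximately (1.0, 1.0)
```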

    In order to better understand larger place fields in the hippocampus, the Mosers had built an enclosure about twice the standard size. After they published a paper in Science in 2004 that reported regularly structured peaks in the firing patterns of entorhinal cortical cells, they conducted follow-up experiments in the larger enclosure. “It seemed almost silly not to do it,” says Edvard Moser. Once they were able to analyze more entorhinal neurons firing over a larger territory, the group realized they were looking at grids.

    How do grid cells help rats, and presumably people, find their way? “The fundamental duty of the grid cells is to provide the coordinate system, with which the place cells can then do their association of objects,” says Redish.

    Consider the find-the-college-library problem. For most schools, you can get a map that lays the campus out on a grid. At Princeton, the Gothic Firestone Library is at grid square F1. At the University of California, San Diego, the spaceship-like Geisel Library is at grid square E7. The underlying map grid is the same; only the identity and position of individual buildings differ. The hippocampus notes the landmarks (Gothic buildings = Princeton) and maps them onto the entorhinal grid. “The grid cells in the entorhinal cortex provide a coordinate system, but the hippocampus must use those inputs to create a map of the environment,” says Terry Sejnowski of the Salk Institute in San Diego, California.

    What lies beneath

    That still leaves the question of how the brain generates its grids. The leading theories, proposed by several groups (among them McNaughton and his collaborators, and David Touretzky and Mark Fuhs of Carnegie Mellon University in Pittsburgh, Pennsylvania), center on “attractor networks,” which can be conceptualized as a sheet of neurons packed closely together like marbles on the surface of a table. In the grid cell attractor network, each neuron in the sheet is a different grid cell. Grid cells adjacent to one another in this sheet are not necessarily next to each other in the brain, but they represent locations that are next to each other in the real world. Thus, if one grid cell fires at a certain location, then the neurons surrounding it in the sheet will fire at nearby locations.

    When the rat stands at one location, the grid cell that represents that position will be very active along with neighboring cells in the network. All this activity forms a “bump” on the sheet. Excitatory connections between each cell and itself, and between it and its nearest neighbors, make the bump self-sustaining. The activated neurons also excite a bunch of inhibitory cells that prevent distant neurons from firing, thus the surrounding flatness. As the rat walks through space, the bump moves along the grid to keep track of its location, like Bugs Bunny burrowing in an old cartoon.
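
    To see how a self-sustaining bump can arise from nothing more than local excitation and broad inhibition, here is a deliberately stripped-down sketch: a one-dimensional ring of model neurons standing in for the two-dimensional sheet described above. Every number is illustrative, chosen only to make the behavior easy to see; this is not any group's published model.

```python
import numpy as np

# Cells on a ring excite their near neighbors and share a uniform inhibition,
# so a single transient nudge leaves behind a self-sustaining "bump" of activity.
N = 128
idx = np.arange(N)
d = np.minimum(np.abs(idx[:, None] - idx[None, :]),
               N - np.abs(idx[:, None] - idx[None, :]))   # distance around the ring
W = np.exp(-(d / 5.0) ** 2) - 0.3                          # local excitation minus global inhibition

rate = np.zeros(N)
rate[60] = 1.0                                             # transient nudge at one location
for _ in range(100):
    rate = np.maximum(W @ rate, 0.0)                       # recurrent drive, rectified
    rate /= rate.sum()                                     # keep total activity fixed

print("bump centred near cell", int(rate.argmax()))        # stays near cell 60 long after the nudge
print("cells above half-max:", int((rate > rate.max() / 2).sum()))  # a small patch, not the whole ring
```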

    “In order for this network to work, the bump has to move with the rat exactly, so if the rat moves 3 feet in real space, the bump has to move however many cells in the brain correspond to 3 feet in real space—and that's not a trivial problem,” says Hugh Blair of the University of California, Los Angeles. “You would need neurons in the attractor network that would encode the speed and direction in which the rat is moving.”

    Rat with hat.

    Researchers can use a cap of electrodes (upper right) to monitor neuronal activity as a rodent explores a large enclosure (above). The firing pattern of an individual “grid cell” (bottom right) helps the animal find its way, even in the dark.

    CREDITS: EDVARD MOSER/CENTRE FOR THE BIOLOGY OF MEMORY, NORWEGIAN UNIVERSITY OF SCIENCE AND TECHNOLOGY

    And that is what the Mosers, postdoc Francesca Sargolini, and their collaborators report finding in the new Science paper. The work reveals “conjunctive cells,” a class of grid cells that give exactly the path integration information—speed of movement and direction—needed to move the bump.

    The entorhinal cortex is “a complicated network consisting of four layers of principal cells … with different morphologies and different interconnections,” notes Edvard Moser. As they probed deeper into these layers with electrodes, they found cells with the properties of both head direction cells and grid cells. Additionally, they found other cells whose firing rate expressed the speed of the animal. “You have cells that express position, direction, and speed,” says Moser. “That's what you need to really tell you where you are at any given time as you're walking around. It's the conjunction of those properties that I believe—and I say ‘believe’ because we haven't shown it—could be fed up to the attractor network of the pure grid cells.”
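
    A crude way to picture how such a speed-and-direction signal could move the bump is to bias the recurrent drive slightly ahead of the current activity. The continuation of the ring sketch below does exactly that; the bias mechanism and every parameter are illustrative assumptions, not the circuitry the Trondheim group describes.

```python
import numpy as np

# Rebuild the illustrative ring network from the earlier sketch.
N = 128
idx = np.arange(N)
d = np.minimum(np.abs(idx[:, None] - idx[None, :]),
               N - np.abs(idx[:, None] - idx[None, :]))
W = np.exp(-(d / 5.0) ** 2) - 0.3

rate = np.zeros(N)
rate[60] = 1.0
for _ in range(100):                                     # let the bump settle near cell 60
    rate = np.maximum(W @ rate, 0.0)
    rate /= rate.sum()

# A "conjunctive" velocity signal is modeled here as a one-cell-per-step
# sideways bias of the recurrent drive (a hypothetical stand-in for the real
# circuitry), so the bump slides in step with the simulated movement.
velocity = 1
for _ in range(30):
    drive = np.roll(W, -velocity, axis=1) @ rate         # drive biased ahead of the bump
    rate = np.maximum(drive, 0.0)
    rate /= rate.sum()

print("bump has moved to cell", int(rate.argmax()))      # about 60 + 30 = 90
```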

    Mapping the future

    Because of the connections between the entorhinal cortex and the hippocampus, O'Keefe, UCL's Neil Burgess, and others have begun to explore whether grid cells have something to do with the mysterious oscillatory firing of groups of hippocampal cells. The role of these 6-to-10-Hz “theta” oscillations has fascinated O'Keefe for years. He and Burgess have recently suggested that interference patterns formed by theta waves with slightly differing frequencies guide the firing of grid cells and thus the creation of the grid.
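
    In schematic form, the interference idea rests on a textbook identity: two oscillations at nearby frequencies $f$ and $f + \Delta f$ sum to a fast oscillation whose amplitude waxes and wanes slowly,

    $$\cos(2\pi f t) + \cos\bigl(2\pi (f + \Delta f)\,t\bigr) = 2\,\cos(\pi\,\Delta f\,t)\,\cos\bigl(2\pi (f + \tfrac{\Delta f}{2})\,t\bigr),$$

    producing one beat every $1/\Delta f$ seconds. If the frequency offset grows with running speed, say $\Delta f = \beta v$ (with $\beta$ a hypothetical scaling constant used here only for illustration), the animal covers a distance $v/\Delta f = 1/\beta$ between beats, the same no matter how fast it runs. That is one way regularly spaced firing in space could fall out of oscillations in time, though this is a simplified rendering of the proposal rather than its published form.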

    Grid cells could have an even more fundamental purpose than navigation, according to Blair. “Grid cells may be giving us our first real glimpse of the building blocks of hippocampal-dependent memories,” he suggests. In collaboration with Kechen Zhang of Johns Hopkins University in Baltimore, Maryland, Blair's lab is exploring whether grid cells can be used as fundamental components for assembling representations of two-dimensional objects. Blair suggests that grid cells could also construct mental representations of more complex objects, such as visual images of faces and scenes.

    “The really remarkable thing is that when you build a two-dimensional memory representation out of grid fields, there is a simple trick you can do to make the memory ‘scale-invariant,'” says Blair. “For example, when you meet a new person, you don't have to store a separate memory of what they look like from close up versus far away. Our work suggests that if you build your memory representation of the person's face out of grid fields, then you can easily represent their face at all possible sizes.” If he's right, grid cells may underlie both how you can find the convention center and how you can find a colleague in the crowd there.

  11. PHYSICS

    Tipping the Scales--Just Barely

    1. Robert F. Service

    Researchers are making big strides in a race to build nano-sized devices capable of weighing a single proton

    Next to Kate Moss, Michael Roukes may be the most obsessive weight watcher on Earth. Roukes, a physicist at the California Institute of Technology (Caltech) in Pasadena, isn't particularly worried about putting on or shedding a kilogram. He's thinking much lighter than that. In the 4 April issue of Nano Letters, Roukes and colleagues report making the most sensitive mechanical scale ever, capable of registering 0.000000000000000000007 grams, or 7 zeptograms.

    Even that achievement—weighing the equivalent of 30 xenon atoms—isn't enough to satisfy him, however. Within 2 years, Roukes and colleagues hope to register the weight of individual hydrogen atoms, a mere 1.66 yoctograms (1.66 × 10⁻²⁴ grams). For perspective, a yoctogram is to a gram as a gram is to the mass of the continental crust under Europe. Top that, Kate.
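
    Those numbers hang together on the back of an envelope. With one atomic mass unit equal to about $1.66 \times 10^{-24}$ grams, a xenon atom (roughly 131 atomic mass units) weighs about $2.2 \times 10^{-22}$ grams, so

    $$\frac{7 \times 10^{-21}\ \text{g}}{2.2 \times 10^{-22}\ \text{g per xenon atom}} \approx 32\ \text{atoms},$$

    consistent with the 30-xenon-atom figure, while a lone hydrogen atom, at about one atomic mass unit, is the $1.66 \times 10^{-24}$-gram target Roukes is chasing.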

    Roukes's team isn't alone in its effort. Numerous groups around the globe are pushing the boundaries of ultrasensitive mechanical mass detectors. Most researchers, including Roukes, are seeking practical applications such as nanoscale versions of mass spectrometers, the large and ubiquitous machines used for weighing molecules. A few, such as the Caltech group, are also hoping that their scales are sensitive enough to pick up individual protons. “It's within reach,” says Rashid Bashir, an ultrasensitive mass detection expert at Purdue University in West Lafayette, Indiana. “But it won't be easy.”

    The race to zeptogram sensitivity began heating up in 2003, when a pair of researchers at Oak Ridge National Laboratory in Tennessee reported that they had created nanoscale devices capable of registering organic compounds with a mass of 5.5 femtograms (10⁻¹⁵ grams). Harold Craighead and colleagues at Cornell University leaped past that figure in April 2004 with a report that they could detect mass changes down to the attogram level (10⁻¹⁸ grams). Kamil Ekinci, a former postdoc of Roukes who now runs his own lab at Boston University, patented an attogram detector with his mentor back in 2001, although their results didn't hit the scientific literature until May 2004. Now, with their Nano Letters paper, Roukes and Ekinci have taken the sensitivity to nearly the single-zeptogram scale.

    There are many ways to measure mass. Most mechanical scales are made from tiny wires of silicon or other sturdy materials suspended over a surface. The scales are anchored either on one end, like a diving board, or at both ends, like a bridge. Researchers trigger oscillations in the wires and track how the frequency of those oscillations changes when a tiny speck of mass is added.
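
    The physics behind these scales is the textbook behavior of a mass on a spring: a resonator with stiffness $k$ and effective mass $m_{\mathrm{eff}}$ vibrates at $f = \tfrac{1}{2\pi}\sqrt{k/m_{\mathrm{eff}}}$, so to first order a small added mass $\delta m$ drags the frequency down by

    $$\frac{\delta f}{f} \approx -\frac{1}{2}\,\frac{\delta m}{m_{\mathrm{eff}}}, \qquad \text{or equivalently} \qquad \delta m \approx -2\, m_{\mathrm{eff}}\,\frac{\delta f}{f}.$$

    This is the generic first-order relation, not a formula quoted by any of the groups here, and it explains the push toward ever smaller, stiffer resonators: the lighter the beam, the larger the fractional frequency shift a given speck of mass produces.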

    Lightweights.

    Nanoscale sensors made by teams at Caltech (above) and Boston University (top) register tiny weights by creating oscillations in wirelike slivers of material and then gauging how those oscillations change when a tiny collection of atoms or molecules is sprayed on top.

    CREDITS (TOP TO BOTTOM): K. EKINCI/BOSTON UNIVERSITY; X. L. FENG AND M. L. ROUKES/CALTECH

    In their paper, Roukes, Ekinci, and Caltech colleagues describe how they constructed tiny bridges from silicon carbide and connected a wire to each end of the bridge. They then placed their device in a vacuum chamber and within a large magnetic field, passing a current through the silicon carbide bridge from one wire to the other. The motion of the electrical charges in the magnetic field exerted a sideways force on the bridge, essentially plucking it like a guitar string and causing it to oscillate up and down.

    The team then used a specially designed electrical feedback loop to make it vibrate at a steady frequency of either 133 or 190 megahertz. This motion created a steady pattern of voltage changes between the wires at either end of the bridge. The researchers then sprayed xenon atoms or nitrogen molecules through a specially designed shutter in the vacuum chamber. When the extra atoms landed on the bridge, the added mass slowed the bridge's vibrations, causing a change in the pattern of voltage readouts.
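
    As a toy illustration of the readout, the snippet below converts a measured downward frequency shift into an added mass using the first-order resonator relation given earlier. The 190-megahertz carrier comes from the experiment described above; the effective mass and the size of the shift are hypothetical placeholders chosen only to show the arithmetic.

```python
def added_mass_grams(f0_hz, delta_f_hz, m_eff_grams):
    """First-order resonator relation: delta_m ~ -2 * m_eff * (delta_f / f0).
    A downward (negative) frequency shift corresponds to added mass."""
    return -2.0 * m_eff_grams * (delta_f_hz / f0_hz)

f0 = 190e6        # resonance frequency reported in the article (Hz)
m_eff = 1e-15     # hypothetical effective resonator mass (grams), illustrative only
delta_f = -0.5    # hypothetical measured downward shift (Hz), illustrative only

print(f"inferred added mass: {added_mass_grams(f0, delta_f, m_eff):.2e} g")
# -> about 5e-24 g for these made-up numbers, i.e., a few yoctograms
```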

    Ekinci says that since 2002 the team has improved the sensitivity of its apparatus 1000-fold. But he says it will take another such jump to detect individual hydrogen atoms. “This approach has very good potential to go to higher sensitivity,” Ekinci says. To succeed, however, Ekinci and Roukes will need to make slightly smaller, more responsive bridges, get them to oscillate at a slightly higher frequency, and tweak the feedback circuitry to improve detection. Each advance has already been demonstrated independently; now Roukes's team is working on putting them all together.

    Other groups are also hard at work. One, led by Andrew Cleland at the University of California, Santa Barbara, is using thermal energy to drive an oscillating beam right up to a critical point, at which a tiny amount of added mass will push it over the threshold and cause the beam to oscillate at a markedly different frequency. But Cleland concedes that Roukes's team is the current leader.

    The ability to weigh individual hydrogen atoms is expected to stimulate interest in nanomechanical mass sensors. Much of the payoff will likely come in differentiating the weights of different biomolecules, such as proteins. Such an advance could lead to nanoscale mass spectrometers capable of weighing individual molecules.

    Current mass spectrometers, by contrast, start with millions of molecules and compute their average weight. By looking at single molecules, researchers could detect minute changes, such as proteins with a very slight sequence change or ones labeled with different isotopes. They could also track the weights of neutrally charged molecules, a feat mass spectrometers cannot perform because they use the charge of different molecules to propel them through the mass detector. By using the lithographic tools of the electronics industry, researchers also could make huge arrays of nanoscale mass spectrometers that look at vast numbers of biomolecules simultaneously.

    Efforts to use nanomechanical sensors to weigh biomolecules are well under way. Craighead's group at Cornell, for example, reported in 2001 that it was able to detect single cells with attogram-scale masses. More recently, researchers have tracked individual viruses and DNA molecules. “But it will take a bit more time” to make practical devices, Ekinci says.

    One challenge is that biomolecules exist in watery environments rather than in a vacuum. But operating nanoelectromechanical systems (NEMS) devices in water causes molecules to stick to their cantilevers and alter their motion. Numerous teams are working on specialized coatings to fend off nontarget molecules or latch onto targeted ones. Their success promises to open new windows into the biochemistry of individual cells.

  12. ASTRONOMY

    Rising From the Ashes

    1. Dennis Normile

    Three years after a devastating fire, Australia's Mount Stromlo Observatory is well on the road to a Phoenix-like recovery

    CREDITS: D. NORMILE; (INSET) PHOTO BY DANIEL BEREHULAK/GETTY IMAGES

    MOUNT STROMLO, CANBERRA—The loss didn't sink in until Rachel Campbell saw the charred remains for herself. Then a Ph.D. candidate at the Research School of Astronomy and Astrophysics (RSAA) of the Australian National University (ANU), Campbell had been wrapping up a search for large objects beyond Neptune that, she hoped, would challenge conventional notions of planet formation. When wildfires swept over Mount Stromlo on 18 January 2003, she heard that RSAA's mountaintop observatory had been “damaged.” Only when the staff was allowed to visit several days later did Campbell discover that the 50-inch Great Melbourne Telescope she had been using, along with her computer with 3 years' worth of analyzed data, had been “totally destroyed.”

    “I felt a lot of confusion,” she recalls. “I knew that the project was ended but didn't know what would happen to me or my thesis.”

    Confusion quickly turned to gritty resolve. Three weeks after the fire, Campbell was ensconced in a lab at the University of Pennsylvania, reconstructing her unique software and preparing to process backup copies of raw image data. Finishing her degree took a year longer than expected. “But I got to reanalyze the data and did a better job the second time around,” she says. She found three new candidate objects and confirmed several others. Besides a Ph.D., her findings led to several invited talks and a research position at Macquarie University in Sydney. The fire was a shock, she says, “but more positives than negatives came out of it.”

    Campbell was not alone in her determination to turn adversity into advantage. Using salvaged computers, temporary offices, and borrowed observing time, RSAA researchers have been so productive over the last 3 years you might never suspect that a fire had gutted some of their key facilities. They have described the oldest star yet found, finished releasing data from the largest-ever galaxy survey, and published on topics including gamma ray bursts and the universe's expansion. Even with their workshops reduced to cinders, RSAA's instrument team delivered a major device to the Gemini North 8-meter telescope on Mauna Kea, Hawaii, last fall and will ship another to the Gemini South telescope in Chile by June. Charles Alcock, head of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, says the group's resilience “is revealing of how strong they are intellectually.”

    Fortunately, RSAA did not lose all of the Mount Stromlo Observatory, a leading astronomy center that over its 82-year history has been known for achievements such as deciphering the nature of the solar corona and gathering some of the first clues to the chemical makeup of the universe beyond the Milky Way. Most staff offices and computing facilities survived, as did newer telescopes at Siding Spring Observatory, about 450 kilometers north of Mount Stromlo. The surviving facilities helped the astronomers get right back to research, and “they were immediately talking about what to rebuild, what to let go of, and what to do better,” says John Tonry, an astronomer at the University of Hawaii, Manoa. As a result, RSAA is now completing the first phase of facility reconstruction, with larger and better equipped workshops opening this summer and a new, advanced telescope set to see first light in late 2006 or early 2007.

    “The staff were determined that this was just going to have as little impact as possible,” says Penny Sackett, director of RSAA and the Mount Stromlo Observatory. Douglas Simons, director of the Gemini Observatory in Hilo, Hawaii, who visited Canberra days after the fire, says he is “incredibly impressed” with how Sackett's group “pulled through that gigantic mess.”

    Annihilation

    Bushfires regularly set Australia's dry interior ablaze. But the fires of January 2003 were unusually intense. Touched off by lightning strikes in the grasslands west of Canberra, the flames advanced slowly toward Mount Stromlo over 2 weeks. On Friday, 17 January, fire modelers assured RSAA officials that Mount Stromlo was safe for the time being. But the next morning, the fires surged, gaining 10 kilometers in 15 minutes, by one estimate. By noon, authorities ordered people off the mountain. By dusk, Mount Stromlo lay in ruins.

    Sackett made it to the top of Mount Stromlo on Sunday afternoon, 19 January. There, she says, she found “a scene of total devastation.” The losses included:

    • A 74-inch telescope fitted with an advanced spectrometer. It was being used to search for ancient stars and to determine the chemical compositions of stars, both essential to developing models of the early universe.

    • The 50-inch Great Melbourne Telescope, once the world's largest. Built in 1868, it had been extensively upgraded and had been a key facility in the hunt for dark matter as part of the Massive Compact Halo Objects (MACHO) project.

    • The instrument workshops, where the Near-Infrared Integral-Field Spectrograph (NIFS), a tool for studying black holes, was undergoing final testing before shipment to the Gemini North telescope.

    • The historic 1924 Commonwealth Solar Observatory building, which housed the library, including irreplaceable monographs.

    • Three smaller telescopes used for public viewing and a half-dozen observatory houses.

    Fortunately, an administration complex and the bulk of Mount Stromlo's computing facilities were spared.

    Reeking of smoke, Sackett drove straight from the mountain to the ANU campus in Canberra, where vice chancellor Ian Chubb had marshaled two dozen university officials to start planning Mount Stromlo's recovery.

    Although Mount Stromlo's offices were intact, all utilities were down. Within 3 days, ANU's computing staff had a copy of the Mount Stromlo server running on campus; 80 astronomers and grad students occupied a computer center for 3 weeks. That gave ANU technicians time to set up generators and a microwave link on the mountain. The staff felt “we can do without toilets, we can bring our own water, but we have to have Internet,” says Sackett.

    The return to Mount Stromlo was not so pleasant. “It was pretty stinky” from the smoke, says Ph.D. candidate Anna Frebel. Graduate students say they felt under particular pressure to get their research back on track. Offers of observing time, access to specialized computers, and support to attend conferences poured in from around the world. “It really was wonderful how the astronomical community helped us out,” says Campbell.

    Back on track.

    While Stromlo director Penny Sackett (above) solved institutional issues, grad students such as Anna Frebel (inset) took care of the science.

    CREDITS: D. NORMILE

    Another student grateful for the community's generosity is Gayandhi de Silva, who had been using the high-resolution spectrograph on the 74-inch Mount Stromlo telescope to chemically fingerprint stars. Stars with similar compositions are thought to have formed in the same galactic region; tracking their positions can give clues to how a particular galaxy has evolved. With several observatories offering observing time, de Silva says, “I was able to choose the one with the best capabilities for my work.” That was Princeton University's Apache Point Observatory in Sunspot, New Mexico. De Silva is now finishing her dissertation and is headed for a postdoc at the European Southern Observatory in Santiago, Chile.

    Frebel was lucky. Part of a team trawling for very old stars, which reflect conditions in the early universe, she was not using Mount Stromlo's telescopes before the fire. All she needed to resume work was a computer and Internet access. Still, she says, “I wanted to do my bit to get [Stromlo research] back on track.” Shortly after the fire, she spotted a candidate old star among survey data from a collaborating telescope. Her team took high-resolution spectra using the National Astronomical Observatory of Japan's 8-meter Subaru Telescope on Mauna Kea, Hawaii. It turned out to be the oldest star yet observed. Frebel was first author on the team's Nature paper and was featured in dozens of newspaper and magazine stories. She is also completing her thesis and has accepted a postdoc position at the University of Texas, Austin.

    As the grad students were showing their mettle, senior staff tackled institutional questions. “One of the big tragedies was losing NIFS,” says RSAA astronomer Brian Schmidt. Designed to provide the most detailed views ever of black holes and address questions about star formation, NIFS was an important instrument for the Gemini consortium. With other large telescopes bringing similar instruments on line in the coming years, there were worries that a delay would close NIFS's window of opportunity for making discoveries, says Simons, who was then Gemini's facilities director. RSAA had also just won a competition to build an Adaptive Optics Imager for Gemini South, an 8-meter telescope on Cerro Pachón in Chile.

    At the postinferno meeting in Chubb's office, Sackett recalls telling the vice chancellor that “we should rebuild NIFS, because it's a symbol, because people need something to do, and because it's science that we care about.”

    It was easier said than done. They didn't have a workshop. They would need to find about $3.5 million. And the Gemini consortium would have to agree. Still, just 3 weeks after the fire, when Simons arrived in Canberra, RSAA had a plan: Mount Stromlo staff would oversee work on a NIFS replacement using the facilities and staff of Auspace, a defense and aerospace contractor in Canberra. The university promised to fund the work immediately and settle with insurers later. With that can-do spirit and university support, Simons says, “it was a no-brainer” to let RSAA go forward with both NIFS and the Adaptive Optics Imager.

    Resurrection

    Sackett and her senior staff also had to contemplate Mount Stromlo's long-term future. The 50-inch Great Melbourne Telescope had been scheduled for another major upgrade. Instead, it was redesigned from the ground up, keeping the same size reflector but with a vastly enlarged field of view and a snazzy 300-million-pixel digital camera. Schmidt says the new scope, called SkyMapper, will be capable of surveying the entire southern sky in two nights, something that would “take a lifetime” with current telescopes. A primary target will be keeping an eye out for extremely rare nearby supernovas. “SkyMapper is really going to be quite revolutionary for [studying] supernovas,” predicts the University of Hawaii's Tonry.

    SkyMapper, expected to see first light in December or January, is being built at Siding Spring. A high-bandwidth optical fiber cable connection will allow control of the instrument from Mount Stromlo, where increasing light pollution from Canberra is degrading observing conditions.

    Inevitably, there have been stumbling blocks. Demolition and construction were delayed by the need to develop a master plan that considered the historical and ecological aspects of development, as Mount Stromlo is an Australian Heritage Site. And ANU has sued its insurers, alleging that they have so far paid only a fraction of what the university says it is entitled to. A shortage of funds is delaying restoration of the Commonwealth Solar Observatory and the replacement of the 74-inch telescope. “That has left a big hole in our observing program,” says Schmidt.

    But 3 years after the fire, RSAA has passed or is reaching a number of milestones. NIFS was delivered to Gemini North last fall, and the Adaptive Optics Imager for Gemini South is nearing completion. Once SkyMapper comes on line, it will largely be business as usual again for Mount Stromlo's astronomers. But their remarkable recovery suggests that the fire of 2003, if anything, has broadened their horizons.
