News this Week

Science  04 Aug 2006:
Vol. 313, Issue 5787, pp. 598
  1. SCIENTIFIC PUBLISHING

    The Undisclosed Background of a Paper on a Depression Treatment

    1. Constance Holden

    Last month, a group of scientists published a review of research on vagus nerve stimulation (VNS), a controversial treatment for depression. But the article, published in the journal Neuropsychopharmacology, omitted an important detail: All the authors are paid advisers to the company that manufactures a device for VNS that was approved last year by the U.S. Food and Drug Administration.

    The episode has raised a stir at the American College of Neuropsychopharmacology (ACNP), publisher of the journal, which has promised an investigation as soon as lead author Charles B. Nemeroff—who is also editor-in-chief of the journal—returns from a vacation in South Africa. Nemeroff, chair of the psychiatry department at Emory University in Atlanta, Georgia, says he and his co-authors informed the journal about their ties to Cyberonics in Houston, Texas, manufacturer of the device. He says the failure to mention those ties in the article, as required by journal policy, was a simple “oversight.” A prominent depression researcher with his thumb in many commercial pies, Nemeroff heads the “mechanism of action” advisory board at Cyberonics.

    The paper, authored with seven other board members and a Cyberonics employee, reviews research on use of the Vagus Nerve Stimulator—an implanted device that sends impulses by wire to the vagus nerve in the neck—to treat severe depression. The article speculates on the mechanisms of action, which are as yet unclear.

    Some observers find this episode particularly troubling because not only is Nemeroff the journal's editor, but the first draft of the paper was also prepared by a professional writer, hired by Cyberonics, who was not listed among the authors. (She was named in the acknowledgements.) The matter attracted considerable press coverage in late July, thanks in large part to the efforts of Bernard Carroll, former chair of psychiatry at Duke University and now at the Pacific Behavioral Research Foundation in Carmel, California. Last month, Carroll broadcast an e-mail to colleagues and the press accusing Nemeroff of running a “slick public relations disinformation campaign,” hiring a “ghostwriter,” and “incestuously” placing the article in his own journal.

    Mood booster.

    Implant sends pulse to vagus nerve, spurring release of brain chemicals.

    CREDIT: (BASE ANATOMY IMAGE) 3D CLINIC/CORBIS

    ACNP initially responded by acknowledging a “serious omission” and saying that corrections were being made online and in print. In Nemeroff's defense, it pointed out that he had recused himself from the journal's editorial process and said, “The charge the paper was ‘ghostwritten’ does not appear to be valid.” Last week, however, ACNP Executive Director Ronnie Wilkins said the ACNP council plans a “thorough investigation” of the matter, including a comparison of the original and final drafts of the manuscript.

    Nemeroff insists he is guilty of “an oversight and nothing more. … I provided all of the financial disclosure information to Neuropsychopharmacology, … but it was not all included in the printed version of the paper,” he said in an e-mail to Science. “These things are being hypermagnified beyond their importance,” says Alan Schatzberg, chair of psychiatry and behavioral sciences at Stanford University in Palo Alto, California, who calls Carroll's accusations “outlandish.”

    Others are more worked up. Psychiatrist Irwin Feinberg of the University of California, Davis, calls Nemeroff's actions “inexcusable.” And Drummond Rennie, an editor at the Journal of the American Medical Association, adds, “It is very bad scientific and ethical practice to have a nonauthor write the first draft.”

    Many acknowledge that, whatever the final outcome of the ACNP's investigation, appearances are everything. This has “sorely devalued” both the journal and ACNP, says psychiatrist Richard Wurtman of the Massachusetts Institute of Technology in Cambridge. “And I'm afraid this perception won't disappear for a long time.”

  2. PLANETARY SCIENCE

    At Last, Methane Lakes on Saturn's Icy Moon Titan--But No Seas

    1. Richard A. Kerr

    Team members poring over radar data returned on 21 July by the Cassini spacecraft believe they now have very strong evidence for methane lakes on Saturn's haze-shrouded moon Titan. That would make Titan the only other body in the solar system known—or ever likely to be known—to have standing bodies of liquid. The new radar images “looked quite convincing; I was impressed,” says planetary radar specialist Donald Campbell of Cornell University.

    When Cassini was launched, most planetary scientists expected that it would find Titan covered by seas of methane liquefied by the moon's −183°C temperatures. But those hydrocarbon seas were nowhere to be seen on Cassini's arrival, and Cassini's Huygens lander later bumped down on methane-damp water ice rather than splashing down in a frigid sea or even a puddle. Yet methane clouds, high methane humidity, and river-cut valleys in the icy surface spoke of methane cycling on Titan much the way water cycles through oceans, clouds, and rain on Earth.

    A likely lake.

    The black blotch in the center of this radar image returned by Cassini is so black that it is probably a pool of liquefied methane.

    CREDIT: NASA/JPL

    On its latest pass by Titan, Cassini's surface-scanning radar detected places that fail to reflect any detectable signal back to Cassini, says team member Jonathan Lunine of the University of Arizona, Tucson. That is just the way a radar beam striking a smooth, liquid surface would behave; the signal would just bounce away from Cassini into space.

    These “radar-black” areas, ranging from 1 or 2 kilometers to 30 kilometers across, meet other expectations of lakes, says Lunine. The putative shorelines are sharp boundaries between radar-black and radar-gray, and some possible tributaries are radar-black as well. And the lakelike features reside poleward of 70°N, where weather models predict substantial rain during Titan's northern winter to fill them. Equatorward of 70°N, where it shouldn't be raining much, the same sort of topography shows no lakelike features.

    Team members think they have finally found the last link in the methane cycle of Titan. To be absolutely certain, the Cassini team decided to scan some of the same “lakes” during an October flyby from a slightly different angle. If there's wind on the lakes, Cassini radar may catch signs of waves.

  3. EARTH SCIENCE

    China Grapples With Seismic Risk in Its Northern Heartland

    1. Richard Stone

    BEIJING—Last week, ceremonies marked the 30th anniversary of the deadliest earthquake in 400 years: the Tangshan earthquake, which killed at least 244,000 people in China's Hebei Province. But as the 1976 cataclysm fades into history, a scientific puzzle endures: The magnitude-7.5 temblor struck along a fault that no one knew existed.

    Tangshan is a striking example of the enigmatic tectonics of northern China—a problem now getting long overdue attention. In one project, China is installing a seismometer array across the North China Plain, and this fall the National Natural Science Foundation of China (NSFC) is expected to unveil a $20 million, 5-year initiative to probe the region's geology and seismic risk. The twin efforts, described here last week at the 2006 Western Pacific Geophysics Meeting, “will treat north China as a natural laboratory for studying earthquakes,” says geophysicist Zhou Shiyong of Peking University. He says that the China Earthquake Administration (CEA), tasked with quake prediction, views north China as an analog of the Parkfield segment of the San Andreas fault in California, the most closely observed earthquake zone in the world.

    The stakes are high: North China is “the political, economic, and scientific center of China,” Zhou says. Seismic risk is of paramount political concern. Moments after a magnitude-5.1 quake rattled Beijing on 4 July, Premier Wen Jiabao phoned the CEA director to ask for a detailed report, says Mian Liu, a geophysicist at the University of Missouri, Columbia.

    Most puzzling is why north China is a seismic hotbed. Much of the region sits atop a craton, a chunk of ancient crust that, by definition, should shake little. But some 150 million years ago, the North China Craton became restive. “No other cratons behave like this. North China is unique,” says Liu. Intraplate faults themselves are not novel: A prime example is the New Madrid fault in Missouri. But deformation in north China is an order of magnitude larger than that of New Madrid, Liu says. The result has been a parade of devastating quakes in northern China, including Tangshan and the worst in history: the Shaanxi earthquake in 1556, which claimed some 830,000 lives.

    Solemn vigil.

    Ruins from the Tangshan earthquake.

    CREDIT: CHINA PHOTOS/GETTY IMAGES

    To help shed light on this tectonic anomaly, the Deep Earth Lightening project—involving CEA, the Chinese Academy of Sciences, and NSFC—is wiring north China with nearly 800 portable broadband seismometers. The array will image the crust and upper mantle to map faults and explore craton evolution and dynamics. The initiative is setting up a data center modeled after the Incorporated Research Institutions for Seismology (IRIS) consortium in Washington, D.C., says Chen Yong of CEA's Institute of Earthquake Science in Beijing.

    Complementing that effort is NSFC's Great North China Initiative. GNCI, a top priority in NSFC's strategic plan, evolved from discussions between top Chinese geoscientists and a U.S.-based association of Chinese expatriates, the International Professionals for the Advancement of Chinese Earth Sciences.

    Both initiatives mark a tectonic shift of a different sort. In the past, Western scientists often hired Chinese counterparts as guides or assistants. Now the Chinese are calling the shots. “This is a fundamental change in collaborating with the Chinese scientific community,” says Liu. U.S. scientists, he says, “should take advantage of this new funding environment.”

    Either way, the initiatives should be revelatory, says Liang Xiaofeng of Peking University: “There are too many unknown structures beneath our feet.”

  4. INTERNATIONAL SCIENCE

    Singapore-Hopkins Partnership Ends in a Volley of Fault-Finding

    1. Dennis Normile

    Singapore's government and Johns Hopkins University in Baltimore, Maryland, are shutting down a joint research and education program that Singapore has funded for 8 years at a cost of more than $50 million. As news of the closure leaked last week, the partners blamed each other for failing to achieve goals on recruiting faculty, enrolling students, and transferring technology to local industry, among other issues.

    The collaborative effort, supported by Singapore's Agency for Science, Technology, and Research (A*STAR), has a troubled history. According to a chronology of events released by A*STAR, Singapore and Johns Hopkins agreed to set up an international clinic as well as an educational and research scheme in 1998. Although the clinic proved successful, the joint educational and research efforts were struggling; in February 2004, they were restructured into the Division of Biomedical Sciences Johns Hopkins in Singapore (DJHS), reporting to the dean of medicine at Johns Hopkins's Baltimore campus.

    Early this year, A*STAR concluded that DJHS was still failing to meet goals. After unsuccessful negotiations between the agency and the university, on 20 June, the director of A*STAR's Biomedical Research Council, Andre Wan, and DJHS Director Ian McNiece jointly sent a private notice to all staff and faculty members that DJHS was being “wound down.”

    Trouble in Biopolis.

    Singapore is pulling the plug on a training and research program cosponsored by Johns Hopkins University.

    CREDIT: MUNSHI AHMED PHOTOGRAPHY

    Few knew about it until the news appeared in Singapore's The Straits Times newspaper on 22 July. The article quoted an unnamed Johns Hopkins University spokesperson as saying the university had done its part to recruit faculty members and graduate students and accusing A*STAR of failing to meet its financial and educational obligations. According to the newspaper, the Johns Hopkins spokesperson called it a “reputational issue for Singapore and A*STAR.”

    A*STAR responded with a barrage of letters to editors and public statements, including charts giving the history of the collaboration and snippets of e-mails between agency and university officials. Among other things, A*STAR claims DJHS failed to meet eight of 13 agreed “key performance indicators.” DJHS was supposed to attract 12 senior investigators of international standing to the faculty in the first 2 years of the new agreement. A*STAR alleges that there is only one faculty member meeting those criteria; others are entry-level academics or do not reside full-time in Singapore. A*STAR also claims that by February this year, DJHS was to have enrolled eight Ph.D. students but had none. Wan says A*STAR responded strongly “to make it clear that Singapore had lived up to our financial obligations and had been more than generous with support and opportunities for the project to succeed.”

    DJHS directed queries to Johns Hopkins's Media Relations office in Baltimore, which, as Science went to press, had not replied to requests for an interview or to e-mailed questions.

    Wan describes the joint effort as “an experiment that didn't give us the results hoped for.” But he notes that Singapore is collaborating with other universities. In July, a separate agency announced plans for a new joint research center with the Massachusetts Institute of Technology. “We have multiple avenues for young Singaporeans to pursue scientific training,” Wan says.

  5. NEUROSCIENCE

    The Emotional Brain Weighs Its Options

    1. Greg Miller

    Faced with a decision between two packages of ground beef, one labeled “80% lean,” the other “20% fat,” which would you choose? The meat is exactly the same, but most people would pick “80% lean.” The language used to describe options often influences what people choose, a phenomenon behavioral economists call the framing effect. Some researchers have suggested that this effect results from unconscious emotional reactions.

    Choice meat.

    The brain's emotional areas react to the language describing a choice—80% lean, for example.

    CREDIT: K. BUCKHEIT/SCIENCE

    Now a team of cognitive neuroscientists reports findings on page 684 that link the framing effect to neural activity in a key emotion center in the human brain, the amygdala. They also identify another region, the orbital and medial prefrontal cortex (OMPFC), that may moderate the influence of emotion on decisions: The more activity subjects had in this area, the less susceptible they were to the framing effect. “The results could hardly be more elegant,” says Daniel Kahneman, a psychologist at Princeton University who pioneered research on the framing effect 25 years ago (Science, 30 January 1981, p. 453).

    In the new study, a team led by Benedetto De Martino and Raymond Dolan of University College London used functional magnetic resonance imaging (fMRI) to monitor the brain activity of 20 people engaged in a financial decision-making task. At the beginning of each round, subjects inside the fMRI machine saw a screen indicating how much money was at stake in that round: £50, for example. The next screen offered two choices. One option was a sure thing, such as “Keep £20” or “Lose £30.” The other option was an all-or-nothing gamble. The odds of winning—shown to the subjects as a pie chart—were rigged to provide the same average return as the sure option. In interviews after the experiment, participants said they'd quickly realized that the sure and gamble options were equivalent, and most said that they had split their responses 50-50 between the two choices.

    But they hadn't. When the sure option was framed as a gain (as in “Keep £20”), subjects played it safe, gambling only 43% of the time on average. If it was framed as a loss, however, they gambled 62% of the time.
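
    The equivalence that subjects reported noticing is simple arithmetic. As a minimal sketch (with illustrative amounts, not the study's actual stimuli), the gamble's odds can be set so its average return matches the sure option, which can then be worded as a gain or as a loss:

```python
# Minimal sketch of the framing setup (illustrative numbers, not the study's stimuli):
# an all-or-nothing gamble rigged to have the same expected value as a sure option.

def expected_value(p_win: float, stake: float) -> float:
    """Average return of an all-or-nothing gamble: win the whole stake
    with probability p_win, otherwise keep nothing."""
    return p_win * stake

stake = 50.0                    # money at stake this round, e.g. GBP 50
sure_keep = 20.0                # gain frame: "Keep 20"
sure_lose = stake - sure_keep   # loss frame: "Lose 30" -- same outcome, different wording

p_win = sure_keep / stake       # odds chosen so the gamble's average return matches
assert abs(expected_value(p_win, stake) - sure_keep) < 1e-9

print(f"Gain frame: 'Keep {sure_keep:.0f}' vs. a gamble worth {expected_value(p_win, stake):.0f} on average")
print(f"Loss frame: 'Lose {sure_lose:.0f}' vs. the same gamble")
```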

    When the researchers examined the fMRI scans, the amygdala stood out. This brain region fired up when subjects either chose to keep a sure gain or elected to gamble in the face of a certain loss. It grew quiet when subjects gambled instead of taking a sure gain or took a sure loss instead of gambling. De Martino suggests that the amygdala activity represents an emotional signal that pushes subjects to keep sure money and gamble instead of taking a loss.

    De Martino says he expected to find that subjects with the most active amygdalas would be more likely to keep sure gains and gamble when faced with a certain loss. But no such correlation turned up. Instead, activity in OMPFC best predicted individuals' susceptibility to the framing effect. De Martino speculates that OMPFC integrates emotional signals from the amygdala with cognitive information, such as the knowledge that both options are equally good. “People who are more rational don't perceive emotion less, they just regulate it better,” he says.

    “It's a nice, strong correlation between individual differences in behavior and individual differences in the brain,” says Russell Poldrack, a neuroscientist at the University of California, Los Angeles. Yet Elizabeth Phelps, a cognitive neuroscientist at New York University, cautions that fMRI studies alone can rarely prove a brain region's causal role. She suggests examining people with damage to the amygdala or OMPFC to clarify how these regions contribute to the framing effect.

  6. AVIAN INFLUENZA

    Hybrid Viruses Fail to Spread

    1. Jocelyn Kaiser*
    1. With reporting by Martin Enserink.

    Experts have long worried that if a human influenza virus and the H5N1 avian influenza strain now circulating across much of the globe were to combine their genes, the result could be a disaster—a lethal virus that spreads easily among people. But scientists at the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, who in controversial experiments have created such potential pandemic viruses in the lab, report this week that the hybrid viruses are, at least in ferrets, relatively benign.

    That doesn't mean the world can let down its guard against H5N1. “I'm cautious about using the word ‘reassuring,’” CDC Director Julie Gerberding told reporters, noting that the study looked only at simple combinations of one human flu virus and a single H5N1 strain. But it would have been bad news if a so-called reassortant had spread easily among ferrets, according to virologist Albert Osterhaus of Erasmus Medical Center in Rotterdam, the Netherlands. We can be a “little bit” relieved, he says.

    To Osterhaus, Gerberding, and others, the new study, published online this week in the Proceedings of the National Academy of Sciences, confirms the value of reassortant experiments, which some bioweapons experts have said should not proceed without more review, if at all (Science, 30 July 2004, p. 594). CDC says the work also establishes ferrets, which have a respiratory tract that makes them similar to humans in their susceptibility to the influenza virus, as an important new animal model in which to test how flu strains spread.

    Mixed up.

    CDC scientists tested hybrids of the H5N1 influenza virus (left) and a human flu virus.

    CREDIT: DR. GOPAL MURTI/VISUALS UNLIMITED

    Since 1997, the H5N1 virus has infected at least 232 people, mostly in Asia, 134 of whom have died. So far, the virus has not assumed a form that passes easily among humans. That could happen if H5N1 were to develop the needed mutations, as likely happened with the 1918 Spanish influenza. Or an H5N1 flu virus that has infected a person could swap some of its eight genes with those of a human flu virus—keeping its H5 hemagglutinin surface protein, to which people have no immunity. Known as reassortment, this process led to two milder flu pandemics in 1957 and 1968.

    So, CDC researchers and collaborators used a lab technique called reverse genetics to make combinations of H3N2, a seasonal human flu strain, and the 1997 strain of H5N1. To model the virus's spread in people, they housed ferrets inoculated with the reassortants in cages adjacent to healthy ferrets, which would allow the infected animals to pass on the virus through the air.

    The hybrids with H3N2 proteins on the outside and H5N1 internal proteins replicated well in cells but didn't transmit as easily in ferrets as H3N2 itself, the CDC team found. And the potentially most dangerous combinations—viruses containing genes for H5N1's surface proteins and for internal human virus proteins—not only didn't grow as well as H5N1 but also didn't spread at all between ferrets. Any reassortant would likely need more genetic changes, such as ones that made the 1957 and 1968 strains better able to bind to human respiratory tract epithelial cells, the CDC team suggests. “The picture is more complex” than just mixing avian and human flu genes, says co-author Jacqueline Katz of CDC.

    The CDC team also infected a ferret with an H5N1 outer/H3N2 inner hybrid, allowed the virus time to mutate, and then inoculated another ferret with virus from the first animal, repeating this cycle four more times. The virus still didn't become transmissible, however.

    CDC and other labs are now conducting reassortment studies with more recent H5N1 strains and more human flu strains. “I don't think we should underestimate the virus,” says flu researcher Yoshihiro Kawaoka of the University of Tokyo and the University of Wisconsin, Madison.

  7. PHYSICS

    High-Temperature Superconductors Feel the Vibe After All

    1. Adrian Cho

    Tiny vibrations could shake up physicists' understanding of high-temperature superconductors. In ordinary superconductors, quantized vibrations, or “phonons,” provide the glue that binds electrons into pairs, which then zip through the material unimpeded. But for various reasons, most physicists believe phonons have little to do with high-temperature superconductors' ability to conduct electricity with zero resistance at temperatures up to 138 kelvin. Now, ultraprecise measurements may nudge researchers to reconsider that assumption.

    “I'm very happy that these results seem to be consistent with what we have been saying,” says Zhi-Xun Shen, an experimenter at Stanford University in Palo Alto, California, who has reported other evidence of electron-phonon interactions in high-temperature superconductors. However, some researchers argue that the vibrations seen in the new work are an experimental artifact.

    To spot the shaking, J. C. Séamus Davis of Cornell University and colleagues employed a fingerlike probe called a scanning tunneling microscope (STM). The STM's tip hovered above the surface of the superconductor, bismuth strontium calcium copper oxide, and electrons jumped between the two. At different points above the material, the team varied the voltage between tip and surface and recorded the rate at which the current changed. That revealed the number of channel-like quantum states into which the electrons could jump.

    As the researchers ramped up the voltage, they observed a deep dip in the number of states; this so-called superconducting gap arises when the electrons form pairs. On either side of the gap, they also saw wiggles, or “inflection points,” as they report this week in Nature. These were signs that something was interacting with the electrons. The researchers proved it was phonons by showing that the energy of the features decreased in samples in which they replaced the common isotope oxygen-16 with heavier oxygen-18. That's exactly what should happen if the wiggles were produced by vibrating oxygen atoms.

    Spot the signal.

    Do the observed vibrations come from an electron jumping straight to the copper oxide layer, or from it stopping in the strontium oxide layer en route?

    CREDIT: ADAPTED FROM S. PILGRAM ET AL., ARXIV: COND-MAT (2006)

    Ironically, changing isotopes does not change the maximum temperature at which a given high-temperature superconductor works. That's one reason many researchers have doubted that phonons cause the pairing of electrons within them.

    Others had observed electron-phonon interactions by, for example, using x-rays to measure the speed of the electrons, which decreases above a certain energy as phonons essentially drag on the electrons. But Davis and colleagues also show a correlation between the energy of the phonon and the width of the superconducting gap, both of which vary almost from atom to atom across the surface. That suggests that the electron-phonon interaction isn't an extraneous detail but actually affects the superconductivity, Davis says. The phonons alone still may not provide the glue for pairing, he says, but “more and more people are concluding that you can't ignore these lattice vibrations.”

    Some researchers are skeptical, however. Davis's team believes the shaking atoms lie in the planes of copper and oxygen in the layered superconductor along which the paired electrons glide. But to reach such a plane, an electron from the STM might first hop onto an oxygen in the layer of atoms above and set it vibrating, notes Manfred Sigrist, a theorist at the Swiss Federal Institute of Technology in Zurich. That could generate a very similar signal, Sigrist says. But Thomas Devereaux, a theorist at the University of Waterloo in Canada, says if that effect were so strong, it should have shown up in other STM experiments, too.

    In spite of the skepticism, it's clear that phonons are now a hot topic in high-temperature superconductivity.

  8. PHYSICS

    Physicists See Solid Helium Flow, But Not in the Most Exciting Way

    1. Adrian Cho

    If seeing is believing, then a new experiment dispels all doubt that, bizarrely, solid helium can flow like the thinnest conceivable liquid. Physicists have literally watched solid helium flow when tugged on by gravity, they report online in Science this week (www.sciencemag.org/cgi/content/abstract/1130879). The direct observation confirms less conclusive evidence of “supersolidity” from 2 years ago. Ironically, it may also undermine the most tantalizing explanation of the phenomenon.

    In 2004, Eunseong Kim, now of the Korea Advanced Institute of Science and Technology in Daejeon, and Moses Chan of Pennsylvania State University in State College reported the first evidence of supersolidity, and others confirmed their results this year (Science, 24 March, p. 1693). Some theorists believe the flow occurs when atoms in an orderly crystal crowd into a single quantum wave in a process called Bose-Einstein condensation. Others say that's impossible, and one group of experimenters claims that signals of the sort Kim and Chan reported vanish when solid helium is gently heated to eliminate flaws in the crystal.

    Now, Sébastien Balibar of the École Normale Supérieure in Paris and colleagues have seen the flow of solid helium—but only in imperfect crystals consisting of several distinct “grains.” That suggests a less exotic explanation for the flow: On the boundaries between grains, the helium remains inherently liquid, and superfluid liquid helium, which also flows without resistance, seeps along the interfaces. “Balibar's experiment says, ‘Wait a minute folks; this might not be as astonishing as we thought,’” says Robert Hallock, an experimenter at the University of Massachusetts, Amherst.

    To see the flow, Balibar and colleagues placed an inverted test tube in a container of liquid helium and compressed the helium to more than 28 times atmospheric pressure to make some of it solidify. The denser solid filled the bottom of the container and grew up into the tube. The physicists then quickly melted some of the helium surrounding the tube. If the solid could flow through itself, then the level inside the tube should fall to match that of the surrounding solid. And that's precisely what the researchers saw—sometimes.

    On two occasions, the level inside the tube fell at a steady rate rather than slowing, a crucial detail that indicates the flow encountered no resistance. In 10 cases, the level inside the tube remained fixed. In fact, the solid flowed only if its surface showed cusplike ridges of the sort that would form when the boundaries between grains poke through the surface. That suggests “this effect is not due to the crystalline part but to the space between the crystallites,” Balibar says.

    Price of perfection.

    Does the faultlessness of this crystal of solid helium stop it from flowing?

    CREDIT: S. BALIBAR, E. ROLLEY, AND C. GUTHMANN/ENS-PARIS (1994)

    However, Balibar's experiment differs from Chan's in a key way, says John Beamish, an experimenter at the University of Alberta in Edmonton, Canada. Chan chills and squeezes his helium so much that only solid helium can exist. Balibar and colleagues study helium at a temperature and pressure at which solid and liquid coexist, like ice and water in a glass. The solid is on the verge of melting, Beamish says, and “when a crystal melts, it melts along the grain boundaries, so at some point you would expect to have flow along them.”

    Kim and Chan also see evidence of flow in helium crammed into porous glass. That would be hard to explain in terms of grain boundaries, which would have to wend through the pores. All this suggests an intriguing explanation. “It's entirely possible that Balibar and company have discovered an interesting effect that doesn't have anything to do with the work of Kim and Chan,” says theorist Anthony Leggett of the University of Illinois, Urbana-Champaign. As with skinning a cat, perhaps there is more than one way to make solid helium flow.

  9. SOCIOLOGY

    Making Connections

    1. Karen Heyman*
    1. Karen Heyman is a writer in Santa Monica, California.

    Social network analysis made news as a possible tool for scanning phone records for security threats, but the field is exerting a broader impact in business, biology, computer networks—and movie titles

    CREDIT: D. COX AND R. PATTERSON/NCSA, UIUC

    Linton Freeman, a professor of sociology at the University of California, Irvine, has just offered the perfect alibi for murder. A colleague was known to attend every one of the talks in a weekly seminar series. At the end, as part of an experiment, Freeman's students asked the other attendees if he'd been at the last meeting. Those who attended regularly assumed that he must have been, because he'd been to every other session. Those who attended irregularly had no assumptions, and many correctly guessed that he had not been. “If you ever want to do a murder, don't show up where you always are; you'll get away with it,” advises Freeman.

    Freeman's real point is not to offer a tutorial in crime but to underscore that his specialty—social network analysis (SNA)—is good at uncovering unexpected outcomes in what may seem like obvious situations. Or more complicated ones. SNA made headlines in May, when newspaper reports fueled speculation that the U.S. National Security Agency was using it to scan phone records.

    But those in the field are quick to point out that SNA isn't simply some diabolical data-mining scheme. SNA has proven its worth in arenas as far-flung as business, biology, social policy, epidemiology, and computer networks. It also spawned the catch phrase “six degrees of separation” and a related trivia game involving actors who have worked with the film star Kevin Bacon. There's even an upcoming television show called Six Degrees. Unfortunately, its practitioners add wryly, almost everything that people outside the field think they know about it is wrong.

    SNA is a full-fledged field of research that dates back at least 70 years. Broadly speaking, it's the study of the structure of connections among individuals, organizations, and nations. Social network analysts directly observe or interview people (“nodes”) going about their daily routines, then graph their connections (“paths”) and analyze the data to find unsuspected structure. Connections that can seem trivial to analyze when only a few individuals are involved can grow to frightening complexity in larger networks, especially because one individual can have multiple kinds of connections. To tackle such problems, network analysts bring to bear techniques from statistics, graph theory, and theoretical modeling; a subset of the field, cognitive analysis, looks at people's perceptions, as in Freeman's murder tutorial. “Network analysis with a cognitive twist studies the individual's perceptions of social structure,” says statistician Stanley Wasserman of Indiana University, Bloomington, “such as your view of who the cliques are versus the actual cliques.”

    Sliding scale.

    A seminal 1998 paper showed that networks come in different degrees of randomness and connectedness. Later work modified the rings into multidimensional lattices and explained how even big networks can be easy to search.

    CREDIT: ADAPTED FROM D. J. WATTS AND S. H. STROGATZ, NATURE 393, 440 (1998)

    The results are often counterintuitive. For example, suppose you need to reach an executive in an unfamiliar corporation. You might start by contacting a “hub” person: someone with links to everyone else in the company. But not necessarily, SNA experts are fond of pointing out: The hub person may be so busy that you would be better off going through someone who isn't quite so central. In fact, Freeman says, corporations are increasingly using SNA to ascertain who actually talks to whom, rather than who merely reports to whom on their organization charts. Being able to correctly trace an individual's social network, freed from the trap of conventional wisdom, is essential for public health and public policy planning, says Gery Ryan of the RAND Corp., who is working on a study of the social connections of homeless women in order to understand how best to provide support services.

    Social networks and the life sciences

    SNA has traditionally been an interdisciplinary field, but in the late '90s, two models, both conceived by physicists, drew in even more disciplines. The first, by Duncan Watts and Steve Strogatz of Cornell University, modeled the “small world problem.” It was published in Nature in 1998, with a follow-up paper in Science in 2002. The second, on “scale-free networks,” by Albert-László Barabási of the University of Notre Dame in Indiana and his then-graduate student Réka Albert, who is now a professor at Pennsylvania State University in State College, appeared in Science in 1999 (15 October 1999, p. 509).

    Everyone has had a “small world” experience: The man you sit next to at your distant cousin's wedding turns out to be your former boss's ex-husband, that sort of thing. In 1967, Harvard University professor Stanley Milgram decided to see whether there was more to the phenomenon than just anecdotes. He asked Midwestern volunteers to send packages to strangers in Boston. The Midwesterners were not allowed to mail the packages directly but had to relay them through personal contacts. Milgram reported that the average number of intermediaries for completed chains was five, which made for a six-linked chain—the basis for the popular phrase “six degrees of separation.”

    Watts and Strogatz wondered what mathematical conditions would be needed to create a “small world” like Milgram's. Purely random sampling wouldn't produce the tight clusters of mutual friends that most of us belong to. But if the world were completely clustered, how could we all be “six degrees” apart? They concluded that networks must exist on a sliding scale between clustered and random, producing tightly knit groups of friends but also short paths that reach throughout the whole network.

    Their 1998 paper famously showed that the same mathematical technique could model a power grid, a film actor's collaborators, and the neural network of Caenorhabditis elegans. It introduced the “clustering coefficient,” a mathematical way to model the friends-of-friends effect: If John knows Mary, and Mary knows Sue, the clustering coefficient weights the likelihood of John also knowing Sue. The concept has become widely used in biological network analysis from neuroscience to bioinformatics. For example, says Albert, the clustering coefficient can be used as an objective criterion for whether two genes are considered coexpressed.
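
    As a rough illustration (a toy graph, not the Watts-Strogatz formulation itself), a node's local clustering coefficient can be computed as the fraction of pairs of its neighbors that are themselves connected:

```python
# Toy sketch of the local clustering coefficient: of all pairs of a node's
# neighbors, what fraction know each other? The friendship graph is invented.

from itertools import combinations

friends = {
    "John": {"Mary", "Sue", "Tom"},
    "Mary": {"John", "Sue"},
    "Sue":  {"John", "Mary"},
    "Tom":  {"John"},
}

def clustering(node: str) -> float:
    neighbors = friends[node]
    if len(neighbors) < 2:
        return 0.0                      # too few neighbors to form a pair
    pairs = list(combinations(neighbors, 2))
    linked = sum(1 for a, b in pairs if b in friends[a])
    return linked / len(pairs)

for person in friends:
    print(person, round(clustering(person), 2))
# John's friends Mary and Sue know each other, but Tom knows neither,
# so only 1 of John's 3 neighbor pairs is linked: a coefficient of 0.33.
```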

    The paper attracted enormous fanfare, but the model overlooked a crucial insight. Jon Kleinberg of Cornell University pointed out in Nature in 2000 that it could explain how Milgram's Midwesterners got near their targets, but not how they ultimately found them. “Truly random connections are good for getting you halfway around the world, but they're not good for getting you the last 100 miles; you can't aim at the target,” says Kleinberg.

    Vogue words.

    Basic ideas of social network analysis have diffused into popular culture.

    In his Nature paper, Kleinberg proposed a model that showed how in very large networks the paths to the target could be found with only a modest amount of computation; it became the basis of many computer search algorithms. “Neither we, nor anyone else in fact, understood that searchability was important; that's what Jon taught us,” says Watts.

    In a 2002 Science paper (17 May 2002, p. 1302), Watts and collaborators Mark Newman and Peter Dodds further refined the model, having realized that the key was multiple interests: A neuroscientist at the California Institute of Technology (Caltech) in Pasadena might never reach a farmer in Vermont through his network of neuroscientists, but he could do it in a short hop if they shared an interest in, say, hiking.

    Despite how often the 1998 and 2002 papers are cited, Watts despairs that the full model is not accurately understood. It's not, as the famous 1998 diagram suggests, a one-dimensional ring but a multidimensional lattice. If you think of relationships in one dimension, it's hard to imagine the “short paths” necessary to get six degrees of separation. But on a multidimensional lattice, draw a line from the neuroscientist network, down through another grid, the hiking network, and on that grid it's a short hop to a farmer in Vermont who hikes.

    Timing was on Watts and Strogatz's side. “[In 1998], there was a collision of a number of factors: massive computing and storage, a genuine social movement to put information on the Internet, which was simultaneously exciting to mathematicians and social scientists,” says Kleinberg. “The Watts-Strogatz paper really crystallized the spirit behind that and brought various communities that traditionally had not interacted closer together.”

    Those communities included biologists and neuroscientists, who had already begun to examine the network properties of cellular and neuronal processes. They would be further inspired by Barabási and Albert's model when it came out a year later in Science.

    Barabási was uncomfortable with the idea that networks could be completely random: “There's no way all these computers on the Internet would be randomly connected. There's no way all these molecules in the cells could be randomly connected. There must be some structure in this network.”

    Borrowing a term from physics, Barabási and Albert modeled what they called “scale-free” networks. Scale-free networks follow power laws. “Unlike the more familiar Gaussian ‘bell curve,’” explains Wasserman, “a power-law distribution is a very skewed distribution.” Applied to networks, this means there are highly connected hubs that get the majority of the connections. In a dynamic, growing network controlled by power laws, there will be “preferential attachment,” essentially a rich-get-richer idea: One is more likely to attach where there are already many links.
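
    The rich-get-richer rule is easy to simulate. The sketch below is a simplified, hypothetical version of preferential attachment (one link per new node, an invented two-node starting graph), not Barabási and Albert's published model in full; running it yields a few heavily connected hubs and a long tail of sparsely connected nodes, the skewed distribution described above.

```python
# Simplified preferential-attachment sketch: each new node links to one
# existing node chosen with probability proportional to its current degree.

import random
from collections import Counter

random.seed(0)

degree = {0: 1, 1: 1}          # start with two nodes joined by one edge
edges = [(0, 1)]

for new in range(2, 2000):
    nodes = list(degree)
    weights = [degree[n] for n in nodes]            # rich get richer
    target = random.choices(nodes, weights=weights, k=1)[0]
    edges.append((new, target))
    degree[new] = 1
    degree[target] += 1

distribution = Counter(degree.values())             # degree -> number of nodes
for d in sorted(distribution)[:5]:
    print(f"degree {d}: {distribution[d]} nodes")
print("most connected node has degree", max(degree.values()))
```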

    After the paper was published, scientists seemed to find power laws everywhere. Their effect on biology has been profound, as researchers such as Marc Vidal of the Dana-Farber Cancer Institute in Boston find that in cell signaling networks there are hub proteins that are heavily interconnected and “spoke” proteins that just touch a few other proteins.

    But some remain skeptical. The novelty and significance of Barabási's idea have been challenged by Evelyn Fox Keller of the Massachusetts Institute of Technology in Cambridge in an article in Bioessays. Engineer and mathematician John Doyle of Caltech, who has published a paper challenging Barabási's concept of Internet architecture, says Barabási “used methodologies that are standard in physics but inevitably lead to errors when applied to technological or biological systems.”

    But Watts, who, like Barabási, wrote a popular book, thinks that much of the backlash derives not so much from the flaws of the scale-free model as from the aggressive manner in which it was promoted as a “universal theory” of networks. (Barabási's book is Linked; Watts's is Six Degrees.)

    At the moment, however, Barabási is more intrigued with network dynamics, trying to understand how the architecture of a network constrains the activities on it. He has also begun to work with more traditional social scientists, including political scientist David Lazer of Harvard's Kennedy School of Government, combining a physicist's expertise with large data sets and a social scientist's expertise on human behavior.

    Middle way.

    A network plot of references in published papers highlights how social network analysis draws on insights from both physical and social sciences.

    CREDIT: D. LAZER, I. MERGEL, A. FRIEDMAN/HARVARD UNIVERSITY (2006)

    In fact, the most striking example of “six degrees” seems to be the amount of collaboration that goes on in SNA, even between those who disagree with each other. Barabási and Watts recently co-edited a book called The Structure and Dynamics of Networks with Newman. Wasserman speaks for many of the SNA old guard when he complains about “Johnny-come-lately physicists” reinventing the wheel. But he still shares two grants with Barabási. One funds the development of new network-analysis software, the “NetworkBench.” The other was to organize the NetSci2006 conference, which drew more than 200 sociologists, physicists, mathematicians, computer scientists, and biologists.

    Lazer points to the emergence of the term “network science” to capture the attempt to bridge the analysis of human and nonhuman networks by researchers across the spectrum. SNA, in other words, is rapidly extending its own nodes and paths.

  10. SOCIOLOGY

    Looking for Patterns

    1. Karen Heyman

    The U.S. National Security Agency's analysis of telephone databases likely combines social network analysis with probabilistic models developed by Russian mathematicians and later refined in Cold War American intelligence projects.

    Probabilistic models allow researchers to estimate the statistical likelihood of certain events occurring. They have been used to determine, for example, whether the pattern of purchases on a credit card is so unusual that the card has likely been stolen. But they are also used along with social network analysis to crack more sophisticated kinds of theft. For example, says fraud expert Malcolm Sparrow of Harvard University, sometimes credit-card fraud is a well-planned attack by a network of thieves. One ring member working in, say, a luxury hotel duplicates victims' credit cards, and weeks later, the cards are used in a foreign country for purchases that can easily be converted to cash. Although it would not be unusual for one wealthy person to travel abroad and buy jewelry, two dozen people buying jewelry in the same foreign city on the same day, who can't be connected through a tour group or family event, triggers alarms: “These are things you would never find by looking at one account or its behaviors,” says Sparrow.

    Connect the dots.

    Russian mathematicians Markov (top) and Kolmogorov paved the way for modern pattern-recognition algorithms.

    CREDIT: YEVGENY KHALDEI/CORBIS

    The underlying math for the most popular probabilistic models was first developed by Andrey Andreyevich Markov, a mathematician who taught at the University of St. Petersburg in Russia. Andrei Nikolaevich Kolmogorov, winner of the Lenin Prize, extended Markov's pioneering probability work, laying the foundation for its application in fields as diverse as fraud detection and bioinformatics.

    A Markov chain allows you to “chain” probabilities together, with each link affecting the next forward link in the chain. Go through a yellow traffic light, and it has a “transition probability” of turning red before you're fully through the intersection. If it does turn red, then the probability you could receive a ticket increases.
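
    In code, a Markov chain is little more than a table of transition probabilities; the numbers in this traffic-light sketch are made up for illustration:

```python
# Toy Markov chain for the traffic-light example: the next state depends only
# on the current state, through invented transition probabilities.

import random

random.seed(1)

transitions = {
    "green":  {"green": 0.7, "yellow": 0.3},
    "yellow": {"red": 0.9, "green": 0.1},
    "red":    {"red": 0.6, "green": 0.4},
}

def step(state: str) -> str:
    options = list(transitions[state])
    weights = [transitions[state][s] for s in options]
    return random.choices(options, weights=weights, k=1)[0]

state = "green"
history = [state]
for _ in range(10):
    state = step(state)
    history.append(state)
print(" -> ".join(history))
```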

    A more complex version, known as the Hidden Markov Model (HMM), was developed in the 1960s as part of classified research at the Institute for Defense Analyses in Princeton, New Jersey. An HMM allows you to discover the unknown cause of a current state. “Whereas a Markov chain just has states, an HMM has both ‘hidden states’ and ‘observations’: the observations are what you see, and the hidden states are the ‘unknown causes,’” says mathematician Jon Kleinberg of Cornell University.

    Take office couture. “The outfits that your colleague wears are the observations, and the hidden state is what she's done that day,” says Kleinberg. “For example, hiking boots as part of an observed outfit may mean that the hidden state—what she did that day—includes hiking.” They can also help you predict a likely outcome, such as whether your friend is going hiking tomorrow. But most importantly, HMMs can help you trace the source of an anomaly. If your colleague always wears a different outfit, then the same outfit 2 days in a row stands out. Both gossips and mathematicians can work back from that observation to the “hidden state”: What is the probability she spent the night in someone else's apartment? What is the probability she worked all night?
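
    A toy version of the outfit example, with invented probabilities, shows how the most likely sequence of hidden states can be recovered from observations (here via the standard Viterbi algorithm, one common way to decode an HMM, which the article does not mention by name):

```python
# Toy Hidden Markov Model in the spirit of the office-outfit example.
# Hidden states are what a colleague did that day; observations are the
# outfits you see. All probabilities are invented for illustration.

states = ["worked_late", "went_hiking"]
start_p = {"worked_late": 0.6, "went_hiking": 0.4}
trans_p = {
    "worked_late": {"worked_late": 0.7, "went_hiking": 0.3},
    "went_hiking": {"worked_late": 0.6, "went_hiking": 0.4},
}
emit_p = {   # P(observed outfit | hidden state)
    "worked_late": {"suit": 0.7, "same_outfit": 0.25, "hiking_boots": 0.05},
    "went_hiking": {"suit": 0.1, "same_outfit": 0.2,  "hiking_boots": 0.7},
}

def viterbi(observations):
    # best[t][s] = highest probability of any state path ending in s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states
            )
            best[t][s] = prob
            back[t][s] = prev
    # trace the most likely path backward from the best final state
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

print(viterbi(["suit", "hiking_boots", "same_outfit"]))
```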

    Combining aspects of social network analysis with Markov models provides the analytical power to probe interconnections in the genome or, say, several billion phone records.

  11. SOLAR PHYSICS

    Space Weather Forecasters Plan a Boost in Surveillance Missions

    1. Dennis Normile,
    2. Richard Stone

    Two new missions to track solar outbursts, radio scintillation, and geomagnetic storms could prove vital now that older satellites are running on fumes

    Scintillating idea.

    The C/NOFS satellite will probe ionospheric “bubbles” that disrupt satellite transmissions near the equator.

    CREDIT: COURTESY OF LAILA JEONG

    BEIJING—Whether it's a thunderstorm supercell spawning a tornado or warm ocean waters feeding a hurricane, turmoil in the lower atmosphere follows predictable rhythms. Not so space weather. Although it's known that solar outbursts spark geomagnetic storms, Earth can be blindsided by devastating but low-profile events. Two missions described here last week at the 2006 Western Pacific Geophysics Meeting aim to fill critical gaps in space weather surveillance.

    First on the lineup is a probe to monitor disturbances in the upper atmosphere that blight satellite communication and Global Positioning System navigation. To study and forecast this phenomenon, the U.S. military plans to launch the Communication/Navigation Outage Forecasting System (C/NOFS) satellite in 2008. China, meanwhile, is planning a major solar observatory, dubbed KuaFu, that would track solar outbursts and geomagnetic storms in fine detail, with a launch target of 2012. “These two missions are very important and promising for space weather forecasting,” says Kazuo Shiokawa, a solar physicist at Nagoya University in Japan.

    Geomagnetic storms occur when surges in the solar wind warp Earth's magnetosphere, sending energy and charged particles into the upper atmosphere. The fiercest storms occur during crests in the sun's 11-year activity cycle, marked by powerful solar flares and blizzards of charged particles called coronal mass ejections (CMEs). Strong storms can short-circuit satellites and power grids. They also pose a risk for space travel. For example, if the Apollo 17 moon mission in December 1972 had been launched 4 months earlier, “the astronauts would probably have been killed” by a barrage of energetic particles from an extraordinary series of superflares and CMEs, says Rainer Schwenn, a solar physicist at the Max Planck Institute for Solar System Research in Katlenburg-Lindau, Germany. Future travelers to the moon or Mars would face the same hazard.

    Geomagnetic storm forecasting took a giant leap forward 10 years ago, after the Solar and Heliospheric Observatory (SOHO)—a joint NASA and European Space Agency mission—began orbiting Lagrangian point L1, an interplanetary doldrums where balanced gravitational forces keep SOHO in a fairly stable perch between sun and Earth. At the meeting, speakers showed a movie of a SOHO camera lit up by a hail of CME particles in October 2003, the precursor to one of the biggest storms ever seen. SOHO gave several hours' warning, ample time to limit damage by powering down satellites. Still, “a significant number of storms cannot be predicted,” says Nandita Srivastava of the Udaipur Solar Observatory in India.

    Nor are geomagnetic storms the only hazard. Satellite transmissions to Earth can be disrupted—an effect called radio scintillation—by “bubbles” in the ionosphere, the ionized upper part of Earth's atmosphere. The origins of these patches of low-density plasma, which can be hundreds of kilometers across and usually occur between dusk and midnight near the equator, are a mystery. “Satellite TV may suddenly disappear when a bubble is passing above,” says Shiokawa. The $100 million C/NOFS mission, run by the U.S. Air Force Research Laboratory and the Department of Defense's Space Development and Test Wing at Kirtland Air Force Base, New Mexico, will be the first to sample plasma density continuously in search of bubbles. “It will be a fantastic mission for improving our ability to forecast these hazards as well as understand the basic mechanisms responsible for creating them,” says space scientist Michael Liemohn of the University of Michigan, Ann Arbor.

    Although C/NOFS's primary objective is to give early warning to the U.S. military, “data will not be restricted,” says Guan Le, a C/NOFS project scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Originally planned for launch on a Pegasus XL rocket in 2004, the satellite has been delayed by technical problems until mid-2008.

    Further in the future, KuaFu could become China's most ambitious space science effort yet. Named after a character in a Chinese legend who chases the sun, KuaFu would consist of a trio of satellites. A probe at L1 would image solar flares and CMEs and give up to 3 days' warning of impending geomagnetic storms. And two satellites in the magnetosphere would provide round-the-clock monitoring of auroras in the Northern Hemisphere, tracking storms as they develop. “KuaFu has numerous firsts, if it goes,” says Eric Donovan, a mission collaborator at the University of Calgary in Canada.

    China would build and launch the KuaFu satellites. Although Canadian and European institutions are taking the lead in defining scientific objectives, officials at the China National Space Administration (CNSA) insist that homegrown scientists play a significant role in developing the instruments, says mission chief Tu Chuanyi of Peking University. “With this mission,” says Schwenn, a KuaFu collaborator, “China wants to join the international space science club.”

    CNSA expects to complete a review of KuaFu by spring and give a final go-ahead in 2009. With SOHO and another L1 probe, NASA's Advanced Composition Explorer, having long surpassed their design lives, KuaFu could prove vital. “Within a few years, we might be completely blind again to mass ejections,” Schwenn says. And that could be as ruinous as turning a blind eye to tornadoes or hurricanes.

  12. CLIMATE RESEARCH

    Waiting for the Monsoon

    1. Catherine Brahic*
    1. Catherine Brahic is a writer for SciDev.Net.

    No one can predict the heavy summer rains that bring the Sahel back to life each year. A new 10-year data research program aims to improve forecasting and model building

    MALBOROU, NIGER—On a hot afternoon in July, a chanting, dancing, drumming crowd has formed on the main road that crosses the village, half an hour's drive north of the border with Benin. The villagers have begun the rain dance. They want to bring on the monsoon that drenches the earth every year from June to August; it's now several weeks late. The millet crop needs 3 months to ripen, and time is running out for planting the seed.

    The people who live on the flat, reddish-brown, dusty landscape of southern Niger depend heavily on the West African monsoon: In 2001, 39% of Niger's gross domestic product came from agriculture, which employs 90% of the workforce and involves virtually no irrigation. Niger's neighbors throughout the Sahel, the strip of Africa that stretches across the continent directly south of the Sahara, face similar circumstances. From the 1970s to the 1990s, the Sahel suffered severe drought, leading to some of the worst famines in recent history. Precipitation levels began to rise in the late '90s, but crops were once more devastated by drought in 2005; 2006 did not begin well.

    Climate scientists cannot say what has delayed the monsoon this year or whether the delay is part of a larger trend. Nor do they fully understand the mechanisms that govern rainfall over the Sahel. Most frustrating, perhaps, is that their prognostic tools—computer simulations of future climate—disagree on what lies ahead. “The issue of where Sahel climate is going is contentious,” says Alessandra Giannini, a climate scientist at Columbia University. Some models predict a wetter future; others, a drier one. “They cannot all be right.”

    Extremes.

    Much of the land of the West African Sahel is parched and unproductive—until it is drenched by midsummer storms.

    CREDIT: CATHERINE BRAHIC

    One obvious problem is a lack of data. Africa's network of 1152 weather watch stations, which provide real-time data and supply international climate archives, is just one-eighth the minimum density recommended by the World Meteorological Organization (WMO). Furthermore, the stations that do exist often fail to report.

    To fill the gap, a European-led consortium of more than 140 European, American, and African institutions has set out to monitor and thoroughly describe the West African monsoon over a 10-year period. Launched in 2001, the African Monsoon Multidisciplinary Analysis (AMMA) has brought together more than 400 researchers. This summer, they are undertaking the most intense data collection to date. From January through September, they plan to monitor not only rainfall but also every aspect of the monsoon: the structure of clouds and their water content; the amount, movement, and characteristics of particles suspended in the air; the distribution of water in river systems; and sea surface temperatures in the Gulf of Guinea and across the Atlantic, as well as temperature, humidity, pressure, and wind speeds at various altitudes across the region.

    Holding on.

    Farm villagers depend on a millet crop—and an annual blast of rainwater—for survival.

    CREDIT: CATHERINE BRAHIC

    This effort is badly needed, because what happens in Africa—particularly in the Sahara and Sahel regions—affects the entire planet. The region is one of the most important sources of heat in the atmosphere. Dust from the Sahara is a significant contributor to global aerosols. Weather systems above the Sahel during the monsoon give rise to some of the hurricanes that sweep across the Atlantic each year. Yet this critical piece of the climate system remains out of focus.

    The hope is that AMMA's data-gathering effort will improve not only regional forecasts but also the performance of global climate models. They need all the help they can get: Even the latest climate models used by the Intergovernmental Panel on Climate Change (IPCC) vary in what they predict for the Sahel over the coming century, ranging from drought to a significant increase in rainfall. “The Sahel remains one of the toughest challenges for modeling,” says Richard Washington, a climate scientist at the University of Oxford in the U.K. who co-authored a 2004 G8 report on African climate science. “The debate on the Sahel has hinged on models, … but they are not a particularly sharp tool in that region.”

    Flawed and divergent

    Several recent studies have underlined the disharmony in the models. In May 2005, Martin Hoerling and James Hurrell of the U.S. National Oceanic and Atmospheric Administration (NOAA) and the National Center for Atmospheric Research presented results at the American Geophysical Union annual meeting that seemed encouraging for the Sahel. Using averaged results from a group of the latest atmospheric general circulation models, they found that the models predicted more rain in the Sahel in the first half of the 21st century.

    In November, Isaac Held of NOAA published results from a new model that reproduced 20th century climate variations in the Sahel better than any other. Looking forward, however, he wrote in the Proceedings of the National Academy of Sciences that the model predicted more drought for the Sahel.

    More recently, Kerry Cook and Edward Vizy of the Department of Earth and Atmospheric Sciences at Cornell University combined approaches in a paper in press at the Journal of Climate. Of the 18 models being used in the IPCC's fourth assessment, they selected three that best reproduced 20th century climate and compared their predictions for the 21st century. Whereas one model simulated severe drying across the Sahel late in this century, another predicted wet conditions for the entire century. The third projected “modest” drying.

    “There must be something in the models' physics that is causing them to respond differently,” says Giannini. Deciphering the problem, she says, “is a scientific priority that requires really getting into the [models'] bowels.”

    Making repairs

    There are telltale signs that the algorithms that drive these models are flawed, some researchers say. For instance, they point out that many models show cooler present-day sea surface temperatures close to the Americas and warmer ones by the African coast, when observations show that the gradient should be the other way around. Cook and Vizy, meanwhile, say that some models place the maximum monsoon rainfall over the Gulf of Guinea instead of over West Africa, where it should be.

    Jan Polcher, a climate modeler at France's research agency, CNRS, suggests that the problem lies in a wide variation in sensitivity to factors such as land or sea surface temperatures. Others point to a lack of resolution: Small-scale turbulence can play an important role in the distribution of energy inside a cloud system, but the models resolve areas of 100 square kilometers or more.

    “One thing is clear,” says Washington. “If we carry on with the models as they are, we are just going to get different answers all the time, so we need to fix their basics.”

    A solution suggested by Cook and Vizy is to assess the likelihood of the scenarios generated by the models by looking for obvious flaws (such as placing maximum rainfall over the ocean instead of over land). Using this method, they concluded that the third model they examined, which predicted modest drying of the Sahel, was most likely to be accurate.
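
    To make that screening idea concrete, here is a minimal, hypothetical sketch in Python of the kind of procedure Cook and Vizy describe: score each model against the observed 20th-century record, discard any with an obvious flaw such as placing peak monsoon rainfall over the ocean, and compare the survivors' 21st-century projections. The array names, threshold, and flaw test are illustrative assumptions, not the authors' actual code or data.

    import numpy as np

    def rms_error(simulated, observed):
        # Root-mean-square error between a model's 20th-century Sahel rainfall and observations.
        return np.sqrt(np.mean((np.asarray(simulated) - np.asarray(observed)) ** 2))

    def rain_peaks_over_land(rainfall_by_latitude, land_latitudes):
        # Crude flaw check: is the latitude of maximum monsoon rainfall over West Africa
        # (land) rather than over the Gulf of Guinea?
        peak_lat = max(rainfall_by_latitude, key=rainfall_by_latitude.get)
        return peak_lat in land_latitudes

    def screen_and_compare(models, observed_20c, land_latitudes, max_error):
        # Return the 21st-century Sahel rainfall trends of models that pass both tests.
        credible = []
        for name, data in models.items():
            if rms_error(data["rain_20c"], observed_20c) > max_error:
                continue  # poor fit to the observed 20th-century record
            if not rain_peaks_over_land(data["rain_by_lat"], land_latitudes):
                continue  # obvious flaw: monsoon maximum sits over the ocean
            credible.append((name, data["sahel_trend_21c"]))
        return credible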

    For Polcher, however, continually modifying and assessing the models so that they match observed climate—without understanding the underlying mechanisms—is a futile exercise. That's why he's leading AMMA's vast initiative to fill the West African climate data gap, together with Thierry Lebel of the French Institut de Recherche pour le Développement, Jean-Luc Redelsperger of the Centre National de Recherches Météorologiques, Chris Thorncroft of the University at Albany in New York, and Douglas Parker of the University of Leeds, U.K.

    The effort will cost $50 million over 10 years; weather instruments and equipment are costly to buy and operate. Remote sensing through satellites will be useful only if information gathered from the images is validated with initial ground observations.

    Ground truth.

    A European-led research project is funding a 10-year study in West Africa that will gather water data, monitor the atmosphere, and feed information into climate models.

    CREDITS: CATHERINE BRAHIC

    Central to AMMA's effort is a plan to rebuild a network of stations that release radiosondes, atmospheric sensors carried aloft by helium-filled balloons. WMO members have agreed to launch and track balloons two to seven times a day across a worldwide grid to provide basic data on temperature, humidity, and pressure. The African section of this global grid fell into disrepair during the 1980s. By the late 1990s, the 8 million square kilometers of West Africa had only eight WMO reporting stations—the same number as France.

    Through negotiations with West African governments and meteorological agencies, AMMA has restored the network to 17 stations releasing balloons up to eight times a day. Each balloon takes measurements every 2 to 4 seconds and relays the information, along with a GPS reading of its location, to a data collection center via satellite.
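
    As a rough illustration of the kind of record each ascent produces, the hypothetical Python sketch below groups the reported quantities into a single sample type; the field names and units are assumptions, not AMMA's actual data format.

    from dataclasses import dataclass

    @dataclass
    class RadiosondeSample:
        seconds_after_launch: float   # a new sample every 2 to 4 seconds
        latitude_deg: float           # GPS fix relayed with each sample
        longitude_deg: float
        altitude_m: float
        temperature_c: float
        relative_humidity_pct: float
        pressure_hpa: float

    def ascent_profile(samples):
        # Order samples by time so one balloon flight reads as a vertical profile
        # of temperature, humidity, and pressure.
        return sorted(samples, key=lambda s: s.seconds_after_launch)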

    The balloons cost about €1000 and are rarely returned to experimenters. (Earlier this year, one landed in a village in Niger, where it remains closely guarded.) After a decade of collecting data on the monsoon, AMMA aims to show which stations are critical to support weather forecasting and climate modeling over the long term. AMMA hopes international agencies will see the value of the network and help underwrite it.

    Turning point

    Three weeks ago, in the savanna that surrounds Niamey, Niger's capital, a tantalizing night storm brought out dozens of seasonal ponds and a cacophony of frogs. Everywhere, men began to hoe the ground, with children trailing behind: two strikes to lift the earth just enough to drop a few seeds in before moving over a meter and starting again. Just 5 kilometers away, the ground remained dry. The rain had come from a small cloud that quickly dumped its 30 mm and moved on.

    But now, fears that 2006 would once more bring famine to the region have abated. The monsoon rains arrived at the end of July. For farmers, any later would have been too late. More data might make it possible to know whether the delay is a symptom of long-term climatic change or simply a seasonal variation.

  13. U.S. HOMELAND SECURITY

    Congress Dials Back Research on Understanding Terrorism

    1. Yudhijit Bhattacharjee

    Legislators cite shoddy management in cutting the 2007 budget for university research within the Department of Homeland Security

    Arie Kruglanski spends a lot of time thinking about what drives terrorists. A social psychologist at the University of Maryland (UM), College Park, Kruglanski believes that basic research on human behavior can fill an important niche in the fight against terrorism. “The intelligence community can address the mosquito,” he says, referring to the pursuit of suspected or actual terrorists. “Academic researchers can help to dry out the swamp.”

    Congress endorsed that argument in 2002 when it included a robust science directorate within the U.S. Department of Homeland Security (DHS) that was created after the 11 September 2001 terrorist attacks. In short order, the directorate began funding a network of centers at universities around the country—including one that Kruglanski co-directs at UM—to generate knowledge aimed at thwarting future terrorist attacks. But only 2 years after the first center was created, Congress has become so unhappy with DHS's management of its research portfolio that it is poised to levy a double-digit funding cut next year across the department's $1.2 billion science and technology directorate, including the centers program. A Senate panel last month labeled the directorate “a rudderless ship without a clear way to get back on course.”

    Downward slope.

    DHS's Melvin Bernstein says that the expected cuts to the university program will prevent the agency from funding any new centers.

    SOURCE: CONGRESSIONAL RESEARCH SERVICE; (IMAGE) MIKE LOVETT/LAB

    Funding for the university program, which peaked in 2005 at $70 million (see graphic), would decline for 2007 to $50 million in the House version and $52 million in the Senate. (The differences will be worked out in the weeks ahead.) The two bodies have proposed even larger cuts—24% and 18%, respectively—in the agency's overall research and development budget, which emphasizes applied work for securing airports and ports and minimizing the risks of biological, chemical, and explosive attacks. The legislators cited poor financial management, the absence of a research plan, and the lack of progress in developing technologies to protect the nation. In addition, senators would prohibit DHS from funding any center for more than 3 years, arguing that Congress expected DHS to spread the wealth by establishing a center and then letting the university find alternative sources of funding. The Senate language does not prevent a university-based center from entering a subsequent competition, a staffer added.

    The reduced funding would freeze the university program at seven centers—including one announced last week—rather than the 10 that the department had hoped to support. And agency officials say the 3-year rule will change the character of the centers, moving them away from exploring fundamental research questions toward work on short-term problems. “We'd have to choose narrower topics instead of open-ended problems that reach into more fundamental aspects of homeland security,” says program director Melvin Bernstein. The shorter timeline would also tilt training toward master's degree students rather than Ph.D.s.

    Legislative aides say that members felt they had no choice but to crack down on DHS's research activities after finding what the House appropriations committee calls “financial reporting deficiencies, including serious difficulties maintaining accurate financial records related to obligations and disbursements.” In short, says one congressional staffer, “the directorate [has failed] to answer how it is executing its programs and what it has done with its money.”

    One example is some $67 million within the university program, which includes fellowships and other initiatives, that Congress awarded over the past 4 years but that DHS had not obligated. Bernstein said last week that the money has “now been fully accounted for,” although he declined to provide a breakdown of how or when it will be spent. “When we started, it took us anywhere from 6 to 8 months to start a center, so the money came in a lot faster than we were able to spend it,” he says. “Now we have a steady state.” However, Bernstein says these newly committed funds won't allow DHS to add to its stable of centers or to award new fellowships next year.

    Shaun Kennedy, deputy director of the DHS-funded National Center for Food Protection and Defense (NCFPD) at the University of Minnesota, Twin Cities, says it's hard for his and other centers to find other federal backers because their work is so multidisciplinary. NCFPD's goal of reducing the potential for contamination in the nation's food supply and minimizing the effects of an attack on it, he notes, requires cross-fertilization across disciplines as diverse as epidemiology, food microbiology, economics, and risk communication. “DHS is the only agency whose mission justifies funding this type of work,” he says.

    UM's Kruglanski and other center directors are hoping that legislators eventually drop the 3-year funding limit when they reconcile the two spending bills, even if they don't restore funding to the program. “Our center is investigating the causes, motivations, and recruitment mechanisms that drive terrorism,” he says. “We need to integrate all the knowledge on terrorism from the past 30 years and carry out a broad set of studies. It doesn't make sense for us to think short-term.”
