News this Week

Science  09 Mar 2007:
Vol. 315, Issue 5817, pp. 1348
1. NUCLEAR WEAPONS

Livermore Lab Dips Into the Past to Win Weapons Design Contest

1. Eli Kintisch

What's the trick to winning a competition to design a nuclear bomb when you can't show it works by blowing it up? Imitate an already-tested weapon just enough to prove your design will do the job.

That's the approach followed by weapons physicists at Lawrence Livermore National Laboratory in California in a Department of Energy (DOE) contest to design a new H-bomb. The challenge, which pitted Livermore against archrival Los Alamos National Laboratory in New Mexico, was to design a weapon that wouldn't require nuclear testing, a practice the U.S. effectively forswore in 1992. DOE officials say Livermore's design was sounder than a more novel proposal from Los Alamos. “This was about starting off with the most conservative approach,” says National Nuclear Security Administration (NNSA) interim head Thomas D'Agostino, who announced the outcome last week.

The Reliable Replacement Warhead (RRW) program was set up by Congress to modernize the aging U.S. stockpile. The two labs were competing to replace a warhead that currently sits atop missiles in U.S. submarines. Some of the bombs are more than 30 years old, and scientists can't detonate them to see how well they've held up. Although the focus is reliability, RRW is also intended to create safer, more secure, and greener weapons. But critics say the program, which could cost more than $725 million by 2012, is misdirected and counterproductive. “This could serve to encourage the very proliferation we are trying to prevent,” says Senator Dianne Feinstein (D-CA).

To look to the future, both teams started with the past. Bolstered with engineering crews supplied by Sandia National Laboratories in Albuquerque, New Mexico, they began by scouring the archives of the roughly 1000 underground nuclear tests that the United States has performed. But they learned different lessons from the data. Livermore physicist and RRW leader Hank O'Brien says that a weapon design tested four times in the 1980s but deemed to be too “oversized and heavy” for use during the Cold War turned out to be a good basis for the new bomb. In contrast, Los Alamos official Glen Mara acknowledges that his team's design was not in the “unique sweet spots of previous [specific] designs that were in the stockpile,” although many nuclear tests and models informed its design.

With basic design in hand, the California team went on to perform some 28,000 simulated explosions of the device on the lab's 100-teraflops supercomputer. Those runs suggested how certain manufacturing defects or design changes would affect its performance when subjected to radiation, shaking, and high temperatures. In early 2006, the teams exchanged what one official described as “phone-book-sized” dossiers of classified data on their weapons for interlab peer review, which also yielded some improvements.
(Some of Los Alamos's RRW features might be integrated into the California design.) But for the Livermore team, says O'Brien, a key moment came last August, when a mockup of the weapon showed “an excellent agreement with our [computer] model” during a monitored detonation using conventional explosives at Livermore's Site 300 facility. That was both “very satisfying” to the team and extremely useful in proving the design's “credibility,” says Livermore manager Bruce Goodwin.

Both winning and losing designs featured controls to thwart detonation by thieves and explosives that are safer to handle, two key facets NNSA had wanted. They also contained no beryllium, an element used in nukes that poses health problems. The New Mexico team added other bells and whistles, including a feature resistant to static electricity, which can cause accidental detonations. The New Mexico design had “many features that were more transformational … [and] some would argue are better” than their rival, says D'Agostino, a reversal of the lab's reputation as being the more conservative of the two labs. But he says Livermore's experimental “pedigree” of four nuclear tests gave it the edge.

That empirical evidence may not be enough to keep the program moving forward, however. Former Livermore director Bruce Tarter says he doesn't think DOE has yet come up with a specific method to measure whether the design meets the criterion of being certifiable “without requiring nuclear testing” as it advertises. Some lawmakers in Congress are wondering whether RRW is even needed given recent findings that the plutonium inside the weapons can last longer than expected (Science, 8 December 2006, p. 1526). They also worry about its impact on efforts to curb nuclear proliferation. The program has a “make-it-up-as-you-go-along character,” says Representative Pete Visclosky (D-IN), chair of a key House funding panel.
He's threatened to cut funding for RRW unless he sees a more “coherent” explanation of its purpose. A panel of nuclear weapons experts convened by the American Association for the Advancement of Science (which publishes Science) has called for “independent review teams” to monitor the next stages of the RRW program. Los Alamos has already been given that role as a consolation prize. Livermore's O'Brien says he's “honored” to have won and motivated to take the next steps, which include full-scale engineering studies allowing researchers to see how the design reacts to a “lifetime of shake, rattle, and roll.” Says Goodwin, “We've taken the first steps, but it's a long road.”

2. WILDLIFE STUDIES

Researchers Explore Alternatives to Elephant Culling

1. Robert Koenig

PRETORIA, SOUTH AFRICA—Outsized tracking collars and dart guns will be in demand as scientists here embark on an ambitious new research program announced last week as part of the government's draft policy to manage South Africa's growing elephant population. The policy's most controversial provision would allow limited culling—which is not being done in any African country—if other approaches fail.

Presenting the policy at Addo Elephant National Park, Environmental Affairs Minister Marthinus van Schalkwyk said a multiyear research plan, suggested by a scientific advisory panel, “will hopefully reduce the scientific uncertainty” about the animals' numbers and movements while managers deal with “immediate challenges” such as minimizing the damage they cause in parks and reserves.

In Kruger National Park, for example, the number of elephants has risen from 7800 to about 12,500 since culling was stopped in 1994. Numbers are also reported to be on the rise in Botswana and Zimbabwe. This contrasts sharply with the trend in west and central Africa, where ivory poaching has decimated some elephant populations.
Based on the amounts of ivory seized and DNA source tracking, a research group estimated in the Proceedings of the National Academy of Sciences (PNAS) last week that 23,000 African elephants were killed for their tusks last year alone. “Overall, the elephant populations might be increasing in perhaps five or six elephant range states” in southern Africa, says the lead author of the PNAS study, Samuel K. Wasser, director of the Center for Conservation Biology at the University of Washington, Seattle. But declining populations are a problem in “the other 30-plus range states.”

Strict policing has limited ivory poaching at South Africa's game parks, but many ecologists blame poor management—including the drilling of more than 300 artificial watering holes over half a century in Kruger Park—for exacerbating the park's ecological problem by changing elephants' movements and feeding patterns.

Zoologist Rudi van Aarde, chair of the University of Pretoria's Conservation Ecology Research Unit, argues that one remedy is to reduce the number of watering holes and remove fences to create transborder “megaparks.” Several such Transfrontier Conservation Areas have been mapped and are being negotiated in southern Africa through the efforts of the independent Peace Parks Foundation. In addition, as part of a long-term megaparks research project, van Aarde's group has outfitted 91 elephants in seven southern African countries with GPS tracking collars.

He believes that the new South African research program should focus mostly on “adaptive management,” monitoring the ecological impact of specific actions, such as removing fences or watering holes. And tracking needs to continue for “several years,” says Norman Owen-Smith, a large-mammal ecologist who directs the Centre for African Ecology at the University of the Witwatersrand in Johannesburg. “Most elephant movement studies so far have been superficial.”

Both van Aarde and Owen-Smith are members of the Elephant Science Round Table, which the environment minister established last year to try to reach a scientific consensus. In December, the 21-member panel, which included Park Service elephant experts, submitted a 13-page plan that calls for the development of predictive models on elephant management, among other goals. The ministry has allocated about $700,000 to start the work and will seek additional funding later.

The environment minister suggests culling as a last resort if other options—including relocation, range manipulation, and contraception—don't work. But neither Owen-Smith nor van Aarde, both of whom generally support the minister's plan, thinks that culling is now warranted in Kruger Park, and both question its long-term effectiveness. “There is no point in culling at Kruger,” says van Aarde. “We know now that the 27-year culling program there, which removed nearly 17,000 elephants, did not reduce their impact in the long run.”

3. METEOROLOGY

A Dose of Dust That Quieted an Entire Hurricane Season?

1. Richard A. Kerr

The 2006 hurricane season was looking grim. Three hurricanes had ripped across Florida during the 2004 season. Four hurricanes, including Katrina, had ravaged the Gulf Coast in 2005. Now meteorological signs were unanimous in foretelling yet another hyperactive hurricane season, the eighth in 10 years. But the forecasts were far off the mark. The 2006 season was normal, and no hurricanes came anywhere near the United States or the Caribbean.

Now two climatologists are suggesting that dust blown across the Atlantic from the Sahara was pivotal in the busted forecasts. The dust seems to have suppressed storm activity over the southwestern North Atlantic and Caribbean by blocking some energizing sunlight, they say. “I think they're on to something,” says hurricane researcher Kerry Emanuel of the Massachusetts Institute of Technology in Cambridge. Dust “might play a big role” in year-to-year fluctuations in hurricane activity.

As the 2006 season approached, conditions looked propitious for another blustery hurricane season. In particular, there was no sign of El Niño, whose Pacific warming can reach out to the Atlantic and alter atmospheric circulation to suppress hurricanes there. But, unremarked by forecasters, an unusually heavy surge of dust began blowing off North Africa and into the western Atlantic at the 1 June beginning of the official hurricane season. Two weeks later, the surface waters of the western Atlantic began to cool compared with temperatures in the previous season.

Climatologists William Lau of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and Kyu-Myong Kim of the University of Maryland, Baltimore County, in Baltimore argue in the 27 February issue of Eos that the arrival of the thick dust and the subsequent cooling were no coincidence. The dust blocked some sunlight and cooled the surface, they say. That cooling went on to trigger a shift toward less favorable conditions for the formation and intensification of storms in the western Atlantic, they argue. As a result, no storm tracks crossed where nine had passed the previous season.

Lau and Kim find that historically, El Niño's influence on Atlantic storms has in fact prevailed in the eastern tropical Atlantic, as it may have done last year when it put in a surprise appearance beginning in August. But in the west, near the Caribbean and the United States, dust has been the dominant external influence, they found. “We're not denying El Niño had an impact,” says Lau, but “maybe we have neglected an equally important factor, if not a more important factor.”

Many hurricane researchers are intrigued but cautious. “The authors have an intriguing hypothesis,” says Christopher Landsea of the National Hurricane Center in Miami, Florida, but “there's not much evidence that there is a direct cause and effect going on here.” And if dust were involved, it would have been more complicated than a simple cooling, says Jason Dunion of the National Oceanic and Atmospheric Administration's Atlantic Oceanographic and Meteorological Laboratory in Miami. The dust comes in a layer of air whose extreme dryness and high winds are thought to discourage storm development and intensification as well.

If dust is a major factor in the Atlantic, it will only complicate forecasting the severity of hurricane seasons. Anticipating the arrival of El Niño is proving tricky enough. Predicting far-traveled Saharan dust months ahead—both the necessary North African dryness and the dust-carrying winds—could be formidable.

4. EVOLUTION

Robot Suggests How the First Land Animals Got Walking

1. Elizabeth Pennisi

Salamandra robotica is a triathlete. She walks. She crawls. She swims. One of very few robots capable of multiple modes of locomotion, this salamanderlike machine has demonstrated that it may have been relatively easy for early animals to take their first steps on land. From a neurological perspective, inducing a transition from swimming to walking is unexpectedly straightforward, explains Auke Ijspeert, a physicist at the Swiss Federal Institute of Technology in Lausanne. Thanks to Salamandra, he and his colleagues have shown that merely changing the strength of the brain signal driving locomotion can determine whether an animal walks or swims. Once the neural networks for moving legs were in place, little additional neural circuitry was required, Ijspeert and his colleagues report on page 1416.

“This is clearly an excellent fusion of biology and robotics to test neurological and evolutionary hypotheses,” says Frank Fish, a biomechanist at West Chester University in Pennsylvania. “This paper will be a high-profile example of how robots can be used as surrogates for living and fossil systems.”

Salamanders are a lot like the first land-based tetrapods. These amphibians swim in the same manner as primitive fish, such as lampreys, and they waddle, with legs splayed out, like alligators and their ancient relatives. The salamander “represents therefore a key animal to understand the evolutionary transition from swimming to walking,” says Ijspeert, who combined forces with INSERM neurobiologist Jean-Marie Cabelguen of the University of Bordeaux, France, and two graduate students to determine how the brain controls this amphibian's movements compared to the lamprey's.

They focused on two networks of nerve cells, called central pattern generators, located along the spine. When a network is activated, its individual nerve cells alternate between firing and being quiet, causing rhythmic muscle contractions. Both the lamprey and the salamander have one network to drive the body musculature. For swimming, this network sends waves of muscular contractions down the body, repeatedly creating S-shaped waves that move tailward. Amphibians have a second one, which controls the limbs.

In 2003, Cabelguen and his colleagues discovered the region of the salamander's midbrain that fires off signals to these two central pattern generators. When the researchers gently stimulated this part of the brain electrically, they caused the limbs to move as if walking. As they gradually increased the applied current, neural activity in the limbs sped up until finally the nerve cells shut down. At this point, the amphibian's limbs stopped moving and the body started undulating much faster, as in swimming.

Ijspeert's group developed a mathematical model of this transition, from which they concluded that the limbs' central pattern generator interfered with the other neural network's ability to set up the S-waves. This interference produced the slower body bending necessary for walking. Only when the limb's central pattern generator was shut down was the salamander's other network of nerve cells free to fire as fast as needed to generate swimming or, on land, crawling.
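The drive-dependent gait switch the model describes can be caricatured in a few lines of code. This is an illustrative sketch, not the authors' published model: the thresholds, saturation levels, and frequency laws below are invented for demonstration. It only reproduces the qualitative behavior reported here, namely that limbs oscillate at low drive (walking) while a strong drive silences the limb generator and leaves the faster body undulation alone (swimming).

```python
def gait_for_drive(d, threshold=1.0, limb_saturation=3.0, body_saturation=5.0):
    """Return (gait, body_freq, limb_freq) for a descending drive d.

    Illustrative parameters: each pattern generator oscillates while the
    drive lies between a lower threshold and its saturation level, and the
    limb oscillators saturate at a lower drive than the body oscillators,
    which produces the walk-to-swim switch described in the text.
    """
    if d < threshold:
        return ("rest", 0.0, 0.0)         # too little drive: no rhythm at all
    body_freq = 0.3 + 0.3 * d             # body network speeds up with drive
    if d < limb_saturation:
        limb_freq = 0.2 + 0.2 * d         # limbs still active: walking gait
        return ("walk", body_freq, limb_freq)
    if d < body_saturation:
        return ("swim", body_freq, 0.0)   # limb CPG saturated: body undulates alone
    return ("rest", 0.0, 0.0)             # drive beyond body saturation: shutdown

for d in (0.5, 2.0, 4.0, 6.0):
    print(d, gait_for_drive(d))
```

Note that in this toy version, as in the experiments, the body frequency during swimming exceeds any frequency reached while the limbs are still engaged.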

Ijspeert then built Salamandra robotica to test the mathematical model's predictions. About 85 centimeters from snout to tail tip and with four rotating legs and six movable joints along her body, she is powered by 10 motors instead of muscles. Using a remote control, Ijspeert and his graduate student Alessandro Crespi sent signals of varying strength to Salamandra.

As in Cabelguen's experiments, less intense signals caused the robot to walk. With stronger signals, the legs sped up. But with the strongest signals, the legs stopped moving and Salamandra began slithering. “This close correspondence suggests that the researchers may have accurately recreated some of the actual neural control mechanisms salamanders use,” says John Long, a biomechanist at Vassar College in Poughkeepsie, New York. The results, Long and others say, suggest that early animals didn't need to invent completely new neural pathways to expand their locomotor repertoire.

Some researchers think this simple mechanism is not the whole story, however. Robert Full, an integrative biologist at the University of California, Berkeley, says Ijspeert and his colleagues “definitely need not only to include motion in their analyses but also the mechanics of the body and an understanding of the environment.”

Nonetheless, says Long, the robot is “the best I've seen in terms of combining, coordinating, and alternating different vertebrate propulsive mechanisms.” If nothing else, adds Miriam Ashley-Ross, a functional morphologist at Wake Forest University in Winston-Salem, North Carolina, “I think that more collaborations between biomechanists and neuroscientists and experts in computer modeling will start up and flourish, spurred on by this paper.”

5. U.S. SCIENCE POLICY

Report Tells NSF to Think More Boldly

1. Jeffrey Mervis

Has the U.S. National Science Foundation (NSF) gotten too conservative? A new draft report from its oversight body calls on the federal research agency to be more receptive to funding wild-eyed ideas that, just maybe, could revolutionize science (nsf.gov/nsb).

The agency's director, Arden Bement, thinks the board's proposal for a separate “transformational research initiative” is itself a bit over the top. The $6 billion foundation is already doing everything it can to identify “potentially transformative” research, says Bement, adding that another program would tax an already overburdened staff.

NSF's peer-review system is widely seen as the gold standard for selecting high-quality research proposals. But board members say they are worried that some scientists don't even bother to apply for grants for ideas that cut across the scientific grain because of what the draft report calls “the external perception that NSF is not as welcoming as it should be to such research.” To erase that perception, the report suggests “a new, distinct, and separate foundation-wide program designed to solicit and support transformational and paradigm-challenging proposals.” The mechanism would serve scientists with brilliant but not-ready-for-prime-time ideas, the kind that “might be at odds with the current thinking in the field,” says board member Douglas Randall, a plant physiologist at the University of Missouri, Columbia. “And if it goes into that merit-review meat grinder, it'll just get spit out.”

NSF is currently analyzing the results of a community survey designed in part to determine whether that perception is true. Bement suspects that concerns have grown in step with the rising share of top-rated proposals that NSF is unable to fund, now one in three. He says he agrees with the board on the importance of funding as much potentially transformative research as possible but not with its conclusion that NSF should launch a separate initiative. “Every directorate has special programs for frontier research,” he notes. “And every program officer is looking for exciting proposals to fund.” One constraint, he says, is a heavy workload: The number of proposals submitted has risen by 47% since 2000, with little growth in staff.
Managing a new program, Bement says, would leave even less time to seek out the type of research proposals that the report is calling for. Several board members say they would like the review process to place more emphasis on the investigator's ideas and track record and less on preliminary data showing that the project is feasible. “Business as usual is what people are comfortable with. We need something different,” says board chair Steven Beering. The report asks NSF to come up with a management plan by August.

6. RESEARCH FACILITIES

China Supersizes Its Science

1. Xin Hao,
2. Hepeng Jia*

1. Jia Hepeng is a writer in Beijing. With reporting by Gong Yidong of China Features in Beijing.

With little fanfare, China is about to spend hundreds of millions of dollars to build major science facilities for everything from crystallography to remote sensing.

BEIJING—As the maglev train from Pudong airport races toward Zhangjiang High-Tech Park on the outskirts of Shanghai, passengers can glimpse what looks like a giant silver nautilus on the horizon. This 36,000-square-meter spiral structure is the Shanghai Synchrotron Radiation Facility (SSRF), scheduled to come online in 2009. At an investment of 1.2 billion yuan ($150 million), SSRF is the most expensive fundamental research project China has ever undertaken—for the time being, that is.

The facility, which will generate powerful x-rays for studying the structures of molecules and advanced materials, will soon be joined by another heavyweight champ. Last month, the Chinese Academy of Sciences (CAS) signed an agreement with Guangdong Province in southern China to build the $250 million China Spallation Neutron Source (CSNS). Thanks in part to a pledge to raise R&D spending from 1.3% of gross domestic product in 2005 to 2.5% by 2020 (Science, 17 March 2006, p. 1548), big science projects that have been on the drawing board for years and new concepts are fast becoming reality.
The central government's National Development and Reform Commission (NDRC), the agency responsible for all major state investments, is bankrolling the construction of a dozen major facilities—to the tune of $750 million—during the 11th 5-year plan, which runs through 2010. (SSRF was launched in the previous 5-year plan.) The spallation source alone will receive $163 million from the NDRC pot, with Guangdong authorities chipping in the rest. So far, NDRC has given an official go-ahead to five of 12 projects; the seven others are expected to receive approval later this year.

Proponents argue that China needs to invest in megaprojects to boost its rapid ascent as a research power. With large facilities, “we can proudly stand up in the international scientific community and no longer rely on foreign equipment to do many experiments,” says Liang Rongji, an official at the “big-science” section of CAS. However, some prominent researchers question the decision to shower such largess on machines. They say what China needs most is to build a critical mass of scientists in many disciplines to get the most out of the new facilities. “Many people are enthusiastic about building instruments, but there are not enough people to do the science,” says Gan Zizhao, a physicist at Beijing University.

NDRC officials declined to provide information on any of the projects, claiming that certain details are state secrets. And most scientists declined to discuss the initiatives on the record, expressing concern that their remarks could jeopardize funding for projects not yet finalized. But from published reports and interviews with two dozen scientists and officials, Science has pieced together a picture of China's ambitious Big Science agenda (see table).
Thinking big

Big science holds a hallowed place in China, where top politicians often wax nostalgic about liang dan yi xing, or “two bombs and one star”: the development of the atomic and hydrogen bombs and the country's first satellite in the 1960s. “Leaders like to support megaprojects for their visibility,” says Cong Cao, an expert on Chinese science policy at the State University of New York in New York City. Megaprojects also fit China's top-down approach to research, and successful projects justify political legitimacy: Just as the nuclear weapons program in the 1960s was touted as a triumph of Mao Zedong Thought, the Beijing Electron Positron Collider (BEPC) was held up as proof of Deng Xiaoping's foresight.

BEPC was the first big science project after the Cultural Revolution ended in 1976. It demonstrates how a large facility can take root in China: support at the highest level of government—Deng himself decreed that it be built; help from international advisers and collaborators; and most importantly, the work of indigenous physicists trained in nuclear weapons and particle physics who managed to turn Deng's dream of an expensive proton accelerator into a more modest but successful collider. Among its achievements, BEPC boasts the most precise measurements of the tau lepton's mass, data that have helped verify the Standard Model of particle physics.

To enhance collaborations with foreign scientists, BEPC's host, the Institute of High Energy Physics (IHEP), forged China's first high-speed Internet link to the outside world—a connection between IHEP and the Stanford Linear Accelerator Center—in 1994. And BEPC has helped China develop technical capacity. “Construction of a large facility needs many novel parts and technologies,” says IHEP Director Chen Hesheng. “Mastering the process greatly improves indigenous skills.” After learning how to make accelerator parts, IHEP now exports some components to Japan, South Korea, and the United States.
Buoyed by its success, BEPC has attracted friends in high places: Its VIP visitor log is a Who's Who of China's political elite. That has kept the funds flowing. In 2000, the central government approved $80 million to renovate the facility. The upgrade—called BEPC II—is scheduled for completion next year. Its power should increase 100-fold, enabling ever-finer measurements of tau lepton mass. Down the road, IHEP plans to build the $2 billion Beijing Tau-Charm Factory to extend the accelerator energy up to 3 GeV for experiments on both the tau lepton and charm quark. And the institute is participating in the International Linear Collider project.

Thinking bigger

China's current scientific leaders hope to steal a page from BEPC's playbook. CAS President Lu Yongxiang, who has made innovation a central theme at the academy, has often urged researchers to transition from “following what others do” to “coming up with what to do.” Nevertheless, several big science projects appear to be “me-too” facilities, including CSNS, which Lu initiated after a tour in 2000 of ISIS, a pulsed neutron source at Rutherford Appleton Laboratory near Oxford, United Kingdom.

According to J. K. Zhao, a senior scientist at Oak Ridge National Laboratory in Tennessee and a consultant to CSNS, Lu was impressed by the multidisciplinary research at ISIS and asked CAS's Institute of Physics for a feasibility study of a similar facility in China. A group of neutron scientists enlisted Zhao and other overseas experts to help draft ISIS-inspired plans. CAS approved the conceptual design in 2005 and funded a preparatory team, led by Wei Jie of Brookhaven National Laboratory in Upton, New York, and IHEP, to start on an engineering design in 2006. To ensure reliability, the team will adopt mature technology whenever possible, says Fu Shinian, Wei's deputy at IHEP.
The phase I design calls for a linear accelerator and rapid-cycling synchrotron to speed up protons to 1.6 GeV and deliver an initial beam power of 120 kilowatts, which can be doubled and quadrupled in future upgrades. The high-energy protons bombard a heavy-metal target, such as tungsten, to knock off a spray of neutrons. “It's like shooting the cue ball into a pile of billiard balls,” says Zhao. Each proton can shear 20 to 30 neutrons off the target, generating an intense pulse for probing microstructures of superconductors, for instance, or proteins. Phase I construction is expected to take 6 years.

The mega dozen. China has set its sights on launching 12 major science facilities by 2010.

Remarkably, CSNS will be built far from the traditional science strongholds of Beijing and Shanghai. Dongguan, a city in the prosperous Pearl River Delta in southern China, offered free land and $63 million toward infrastructure costs. “It's a reasonable deal for Dongguan, where strong economic growth and weak research capacity result in highly unbalanced development,” says CAS's Liang. “Many scientific talents may be attracted to the city.”
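The beam figures quoted for CSNS imply an enormous neutron output. A back-of-envelope check, using only the numbers given in the text (1.6 GeV protons, a 120-kilowatt beam, 20 to 30 neutrons per proton) and treating them as rough design values rather than an official specification:

```python
# Rough neutron-yield estimate for the CSNS phase I beam, built only from
# figures quoted in the article; illustrative arithmetic, not an official spec.

EV_TO_J = 1.602e-19                 # joules per electron volt
proton_energy_j = 1.6e9 * EV_TO_J   # each proton carries 1.6 GeV of energy
beam_power_w = 120e3                # initial beam power: 120 kilowatts

# Beam power divided by energy per proton gives protons on target per second.
protons_per_second = beam_power_w / proton_energy_j

# Each proton shears 20 to 30 neutrons off the heavy-metal target.
neutrons_low = protons_per_second * 20
neutrons_high = protons_per_second * 30

print(f"protons on target per second: {protons_per_second:.2e}")
print(f"neutrons per second: {neutrons_low:.1e} to {neutrons_high:.1e}")
```

On these numbers the source would put roughly 5 × 10^14 protons on target each second, yielding on the order of 10^16 neutrons per second, which is why even the modest-sounding 120-kilowatt phase I beam produces pulses intense enough to probe superconductors and proteins.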

Although Zhao wonders how many senior scientists with families will relocate to Dongguan, he is confident that universities in Guangdong and Hong Kong will pack the facility with young researchers. Fu says senior scientists from Beijing will train the neophytes but acknowledges that “experts sent by IHEP alone certainly are not enough.” So CSNS will open its doors to foreign researchers. For starters, Dongguan in April will host the 18th Meeting of the International Collaboration on Advanced Neutron Sources, the first time the event will be held in China.

Chinese astronomers, meanwhile, should soon hear a decision on the amount NDRC will spend on the world's largest single radio telescope, the Five-hundred-meter Aperture Spherical Telescope (FAST). The instrument, first proposed by China's National Astronomical Observatories (NAO) in 1994, would allow astronomers to peer farther than ever before, deep into the early universe, says project director Peng Bo.

NAO chose a limestone karst depression in southwest Guizhou Province as the site to suspend a receiving dish made of 4600 triangular panels, taking advantage of the natural topography to reduce construction costs. FAST had been in the running to host the Square Kilometer Array, an international radio astronomy project that would enable astronomers to observe the formation of the early universe and test Einstein's theory of general relativity (Science, 18 August 2006, p. 910). However, last September, the international consortium narrowed the sites down to candidates in Australia and South Africa. That “won't affect the investment and construction of FAST,” says Peng. He estimates that FAST will cost at least $86 million and take 6 years or more to build.

In comparison, the gestation time of the High Magnetic Field Facility has been much shorter. Scientists first proposed the big science project in autumn 2004 and received NDRC approval earlier this year. The facility will be hosted in two cities to “take advantage of the existing technical capability of each city,” says Kuang Guangli, director of CAS's Hefei High Magnetic Field Lab in Anhui Province. Hefei will be home to 35- to 40-tesla water-cooled and superconducting steady-field magnets, primarily for materials science research, such as probing the quantum Hall effect, and for magnetic resonance imaging. Wuhan, in Hubei Province, will get 50- to 80-tesla pulsed field magnets to complement an existing low-temperature lab and pulsed-power generator. Researchers there will plumb the effects of ultralow temperatures and ultrahigh magnetic fields. NDRC has allotted $48 million for the 5-year construction effort.

Thinking wider

With megaprojects sprouting like bamboo shoots after a spring rain, big science is no longer the sole province of CAS's physical sciences division. Other new entries span fields such as agriculture, biology, geology, and remote sensing.

One of the first projects NDRC finalized, in 2005, is the Meridian Space Weather Monitoring Project. It has a wide reach, from fundamental astrophysics—such as probing how solar flares and coronal mass ejections influence Earth's upper atmosphere—to monitoring near-Earth space to ensure safe satellite operation.

Run by CAS's Center for Space Science and Applied Research, the project will equip 15 ground-based observation stations with laser radars and other instruments to record temperature, air density, and electromagnetic data from above an altitude of 20 kilometers to several hundred kilometers. Its first phase will cost $25 million and is expected to take 3 years to complete. Project scientists have also proposed an International Space Weather Meridian Circle Program to link national arrays of ground-based monitors to enhance space environment observation worldwide.

Other big science initiatives are less well defined. Take the estimated $125 million “protein science research facility.” At the height of the SARS outbreak in 2003, after Rao Zihe's group at Qinghua University in Beijing cracked the crystal structure of the main protease of the SARS virus using IHEP's synchrotron source, NDRC wanted to reward Rao with a major protein crystallography facility at CAS's Institute of Biophysics, where Rao had become director in 2004. But the project derailed when Rao left Beijing last year to become president of Nankai University in Tianjin. CAS Vice President Chen Zhu then proposed building the facility at Shanghai's Zhangjiang High-Tech Park, near SSRF, to take advantage of the new synchrotron source. But many biologists have said that protein science is more than crystallography and that the new center should not be so far from existing biological institutes.

The political tussle is heating up. Sources say that one faction is lobbying for the facility to be built downtown, near CAS's Shanghai Institutes of Biological Sciences. “Zhangjiang is too far to go for experiments if you study proteins in living cells,” says one researcher. Others are trying to carve off funds for existing labs in Beijing. They argue that if too much money is spent in equipping a single facility, waste is inevitable.

Indeed, researchers acknowledge that China's funding system for megaprojects has a serious flaw. It favors one-shot projects: revolution, not evolution. “In Japan or South Korea, they build synchrotron sources one beamline at a time as needed,” says Fudan University physicist Zhang Xinyi, who directed the National Synchrotron Radiation Lab (NSRL) in Hefei before moving to Shanghai. “But in China, we have to build the facility as one big project because all the funding comes in a single chunk.”

In addition, an unwritten rule sets a facility's operating budget at one-tenth the construction cost. “On many occasions, the operating budget is determined before construction begins,” says CAS's Liang. This hamstrings researchers when unforeseen problems arise. For example, when NSRL in the 1990s encountered trouble with beam stability, which thwarted experiments, its operating budget was insufficient to carry out a fix. NSRL's synchrotron source was stabilized in 2004, only after NDRC approved $15 million for an upgrade.

Zhang wants to apply lessons learned from managing the Hefei synchrotron to the construction of the Shanghai source. He has suggested to NDRC that some money be set aside so users can bid to design their beamlines and experimental stations. However, he says NDRC preferred to stick with its usual approach to allocating money. Zhang still hopes that SSRF's management will be more flexible; it has already agreed to let him design his own beamline for research on condensed matter physics and synchrotron radiation imaging, with funding from Fudan University.

But most would-be users do not have the couple of million dollars for this kind of instrument work. IHEP's Xian Dingchang, one of China's foremost synchrotron experts, says that letting major users participate in construction and management is popular in foreign synchrotron labs, but “our system does not allow it. Organizations want to control all the resources they've got.”

Xian hopes that China's spallation neutron source can do better. According to Fu, CSNS has involved users from the beginning and modified designs after input from large user meetings. “We also learned lessons about the operating budget from previous projects,” says Fu, who intends to argue for more operation and maintenance funds from the central government. Guangdong Province already has promised some help.

Some Chinese scientists criticize the government's funding system for prizing machines over people. Ultimately, however, it is what scientists achieve with CSNS and its big brethren that will prove the wisdom of China's big investment in Big Science.

7. RETROVIRUS MEETING

Hope on New AIDS Drugs, but Breast-Feeding Strategy Backfires

1. Jon Cohen

Two novel drugs show promise in hard-to-treat patients, but breast-feeding studies underscore the difficulty of applying advances in the real world

LOS ANGELES, CALIFORNIA—In the arcade game Whac-a-Mole, mallet-wielding players pound moles into the ground only to have the buck-toothed pests pop out of another hole. HIV has a lot in common with those moles. Although research has shown that breast-feeding carries a significant risk of transmitting HIV from an infected mother to her baby, new studies presented here last week at the largest annual U.S. AIDS meeting* highlight the dangers that alternative strategies such as infant formula present in poor countries. Similarly, talks at the meeting underscored that despite much success in increasing access to anti-HIV drugs, great disparities still exist between the developed and the developing world. But there was one bright spot: Data on two new medications indicate that they can knock down HIVs that have become resistant to current drugs and are popping back up.

The most dramatic talk on alternatives to breast-feeding focused on Botswana, where the government advises all HIV-infected women to use infant formula to avoid transmitting the virus. (The World Health Organization, by contrast, recommends that HIV-infected women use formula only when it is “acceptable, feasible, affordable, sustainable, and safe.”) Nearly two-thirds of infected mothers in Botswana now use formula, which is provided free by the government. But that policy appears to have had a tragic downside. Researchers studying an especially deadly outbreak of diarrheal diseases last year reported that infant formula, as compared to breast-feeding, increased a child's risk of death from diarrheal disease 50 times, likely because a severe flood contaminated the water used to make the formula.

From January to March 2006, 532 children under 5 in the country died from diarrhea, up from 21 the year before. Medical epidemiologist Tracy Creek of the U.S. Centers for Disease Control and Prevention in Atlanta, Georgia, and colleagues studied a cohort of 153 Botswanan children hospitalized for diarrhea. Of these, 33 died, and all but one were formula-fed. The HIV-infected status of the mother and the infant was not associated with any of the deaths. Stool samples showed widespread infection with cryptosporidium, Escherichia coli, and salmonella. “It's a stunning story,” says pediatrician Hoosen Coovadia of the University of KwaZulu-Natal in South Africa.

Creek's team has advised Botswana to reevaluate its universal formula policy, stressing that the risk of viral transmission from breast-feeding must be balanced against the risks of formula use in such settings. But James McIntyre, an obstetrician/gynecologist at Baragwanath Hospital in Soweto, South Africa, cautioned that researchers and policymakers should not lose sight of the real goal: to make both feeding approaches safer. Several studies under way are evaluating a myriad of strategies to do just that, including distributing chlorination solutions and using anti-HIV drugs in mothers and uninfected babies more aggressively. But a Zambian study presented at the meeting found that one seemingly logical way to make breast-feeding safer, early weaning, offered no benefit. The study, which compared abrupt weaning at 4 months to breast-feeding for an average of 16 months, found an equal number of HIV infections and deaths by 2 years of age.

The most promising news at the meeting came from reports of large efficacy trials of two AIDS drugs, slated for approval as early as this year, that have novel targets. One of the drugs, raltegravir, made by Merck & Co. in Rahway, New Jersey, cripples HIV's integrase enzyme. In studies of nearly 800 patients who had viruses that were highly resistant to existing drugs, about 60% had undetectable levels of HIV in their blood after 24 weeks of treatment with raltegravir and an “optimized” cocktail of other drugs. In the control group, which received just the optimized regimens, viral loads became undetectable in roughly 30%.

The second drug, maraviroc, targets human immune cells rather than the virus. HIV enters cells by attaching to the CD4 receptor and one of two chemokine receptors, CCR5 or CXCR4. Maraviroc, made by Pfizer in New York City, gums up CCR5 to prevent HIV from binding to it. In studies of more than 500 people infected with HIVs that prefer CCR5 receptors, maraviroc drove viral levels down to undetectable in 48.5% of the treated group versus 24.6% in a control group that received optimized regimens alone.

David Cooper of the University of New South Wales in Sydney, Australia, who has tested both drugs, says if their potency and safety hold up, clinicians should reevaluate when to start treatment. Current recommendations encourage delaying treatment until serious immune damage occurs, in part to reduce toxic effects. But the tradeoff is that people suffer from the prolonged immune stimulation that HIV triggers, which Cooper says is likely responsible for the increased number of deaths from heart, liver, and kidney disease. “If you start people earlier, you might have a significant impact on all of these,” he says.

For the vast majority of HIV-infected people who live in poor countries, however, it may be many years before maraviroc and raltegravir offer any relief. Matthias Egger of the University of Berne in Switzerland and colleagues reported that 59 different drug regimens are available to most AIDS patients in North America, but only three are typically available in Asia and Africa. Solve one problem, another surfaces.

• * 14th Conference on Retroviruses and Opportunistic Infections, 25–28 February, Los Angeles, California.

8. EVOLUTION

Jurassic Genome

1. Carl Zimmer*
1. Carl Zimmer's latest book, on E. coli and the meaning of life, will be published next spring.

Dinosaur fossils are helping scientists tease apart why the sizes of genomes vary so dramatically among species

Tyrannosaurus rex, it turns out, had a pretty small genome. A team of American and British scientists estimates that it contained a relatively puny 1.9 billion base pairs of DNA, a little over half the size of our own genome.

The scientists who came up with this estimate—along with estimates for the genomes of 30 other dinosaur species—had no ancient DNA to study. T. rex, after all, became extinct 65 million years ago, and its genome is long gone. Instead, they discovered a revealing correlation: Big genomes tend to be found in animals with big bone cells. By comparing the size of cells in dinosaur fossils to those of living animals, the scientists got statistically sound estimates for the sizes of the dinosaur genomes.

The findings, published by Nature this week, are more than just a curiosity. Chris Organ, a Harvard University paleontologist and the lead author of the new paper, says the estimates shed new light on a big puzzle: Why do the genomes of living species come in such a staggering range of sizes, varying more than 3000-fold in animals? A fruit fly's genome is one-350th the size of ours, whereas the marbled lungfish genome is 37 times bigger. Recently, some large-scale comparisons of genome sizes have suggested that natural selection may favor big genomes in some species and small genomes in others. But some skeptics argue that genome size may not be adaptive at all. Now, with the advent of what Organ likes to call “dinogenomics,” scientists can begin to tease out some answers by adding extinct species to the emerging picture of genome evolution.

The new study will have its most direct impact on tracing the evolution of bird genomes. “Birds are dinosaurs; they're the last vestige,” says Organ. Scientists have long noted that birds have small genomes compared to reptiles, their closest living relatives, but it was unclear how and when that change occurred. Organ's study suggests that the dinosaur ancestors of birds had evolved small genomes long before birds took to the sky. “I think it's very exciting,” says T. Ryan Gregory, an expert on genome size at the University of Guelph in Canada. “It's the kind of paper we've needed for a long time.”

Giant genomes in lowly creatures

The wide array of genome sizes startled scientists when it came to light in the early 1950s. Until then, the prevailing wisdom had been that complex animals needed bigger genomes than simple ones needed. And yet, as one paper explained, a salamander's genome “contains 70 times as much DNA as is found in a cell of the domestic fowl, a far more highly developed animal.” As researchers sized up more genomes, the paradox grew deeper. Some single-celled protozoans turned out to have bigger genomes than humans. The genome of Gonyaulax polyhedra, for example, is 28 times the size of ours.

A solution of sorts emerged in the 1970s: so-called junk DNA. In addition to protein-coding genes, genomes contain stretches of DNA that encode RNA molecules or are just vestiges of old genes. Many genomes, including our own, are dominated by viruslike sequences of DNA called mobile elements that can make new copies of themselves that get inserted in new spots in the same genome. The human genome is 98.5% noncoding DNA.

Comparing the genomes of living species, scientists have found that genomes can expand and shrink quickly, with mobile elements spreading like a genomic plague. The cotton genome, for example, tripled in size over the past 5 million to 10 million years. On the other hand, copying errors can cause cells to snip out large chunks of noncoding DNA by accident, shrinking their genomes in the process.

To test whether natural selection plays a strong role in determining the size of a species' genome, scientists have compared a wide range of species, searching for correlations between genome size and other traits that might be adaptive. Finding these correlations has been difficult, however, because relatively few genomes had been measured until recently, and many of those measurements turned out to be wrong. Genome sizes are easy to misjudge, even with modern genome sequencing methods. When scientists sequence a genome, they generally break it up into fragments and then try to piece them together like a puzzle. Noncoding DNA is loaded with repeating sequences, which are difficult to reassemble properly.

Things are improving, says Gregory. New techniques are enabling more-precise measurements—for instance, scientists are adding DNA-staining compounds to cells and then using image-processing software to analyze the amount of stain. And the results of these studies are now being stored in online databases, making possible large-scale comparisons. Gregory maintains a database of animal genome sizes at the University of Guelph (genomesize.com), Kew Gardens biologists manage one for plants and algae (www.kew.org/genomesize/homepage.html), and biologists at the Estonian University of Life Sciences run a database for fungi (www.zbi.ee/fungal-genomesize). Together, the databases contain information on more than 10,000 species.

One of the first correlations scientists noticed was between the size of genomes and the size of cells. It cropped up in a study on red blood cells in vertebrates. Later studies also found a link between cell size and genome size in other groups of species, such as plants and protozoans, and in other types of cells in vertebrates, although not all.

Some scientists have argued that natural selection favors big or small genomes because they produce big or small cells. Take the case of Trichomonas vaginalis, a sexually transmitted protozoan that lives in the human vagina. When a multi-institute group led by Jane Carlton, who is now at New York University, published the organism's genome in the 12 January issue of Science (p. 207), they observed that T. vaginalis is padded with far more mobile elements than are found in related protozoans that live elsewhere in the body. The scientists suggest that when T. vaginalis moved into its current ecological niche, its genome expanded rapidly. The protozoan itself became bigger as a result, which made it more effective at chasing and engulfing its bacterial prey.

Changing cell size may benefit other kinds of species in other ways. In some groups of animals, species with high metabolic rates tend to have small genomes, for example, whereas species with slow metabolisms have big ones. One possible explanation is that small genomes give rise to small blood cells, which have a high surface-to-volume ratio and can transport oxygen faster across their membranes. If a warm-blooded animal needs to use a lot of oxygen to fuel its metabolism, a small genome might give it an evolutionary edge.

The fossil record

Consistent with this hypothesis, birds have much smaller genomes than those of their reptilian relatives. But if birds evolved smaller genomes for their high metabolism, the question naturally arises, when did that shrinkage take place? Organ realized that dinosaur fossils might hold the answer.

Some dinosaur fossils are so well preserved that they still have the cavities that once held their bone cells (known as osteocytes). But no one had ever established a link between genome size and osteocyte size. “That was our first step,” says Organ. He examined bones from 26 species of birds, reptiles, mammals, and amphibians and, with colleagues at Harvard and the University of Reading, U.K., mapped the measurements onto an evolutionary tree. The correlation was good enough that they could use the size of a species' osteocytes to accurately predict its genome size. The scientists then added to the tree branches for 31 species of dinosaurs and used the size of their osteocytes to estimate the size of their genomes. From that information, they inferred how the size of dinosaur genomes had evolved over time.

Their analysis suggests that the common ancestor of dinosaurs, a small four-footed reptile that lived about 230 million years ago, had a relatively big genome about the same size as an alligator's. That common ancestor gave rise to several major branches of dinosaurs. One of those branches, the ornithischians, included big herbivores such as stegosaurs and Triceratops. Their genomes did not change much. “These guys have a typical reptilian-sized genome,” says Organ.

But another branch of dinosaurs—bipedal predators known as theropods—evolved significantly smaller genomes. Theropods would ultimately give rise to birds. “This blows out of the water the idea that small genomes coevolved with flight,” says Organ.

Organ suggests that theropods evolved to have higher metabolic rates than other dinosaurs had, and as a result, natural selection favored smaller genomes and smaller cells. Other paleontologists have also found evidence for bird biology in bipedal dinosaurs, including feathers, rapid growth, and nesting behavior. “You don't decide you're going to fly and be warm-blooded like a bird and then make all these changes,” says Organ. “They're all small cumulative things that go way, way back, and they come together to produce this end form.”

Although Gregory and others praise Organ's paper, some scientists are not as impressed. “It's a cute paper, but I'm not terribly confident in the outcome,” says Michael Lynch of Indiana University, Bloomington, who questions whether natural selection is responsible for driving genomes to different sizes to fine-tune metabolism. “There's a correlation of the two, but I don't know of any direct demonstration of causality.”

Gregory concedes that even if metabolism can account for the small genomes of animals such as dinosaurs and birds, it won't explain all the patterns scientists find. Plants, for example, show a similar correlation between cell size and genome size, but they don't have an animal-like metabolism. It's possible that plants have different genome sizes because genome size changes the way their cells capture sunlight or transport fluids. “Any one feature isn't really going to cover it,” Gregory says. “You have to look from the bottom up and the top down in every case.”

9. NEUROSCIENCE

Hunting for Meaning After Midnight

1. Greg Miller

The brain is anything but quiet during sleep. Is it making memories, searching for insight, or up to something else entirely?

Even sound sleepers have restless brains. Your body may be largely motionless once your head hits the pillow, but inside your skull, millions of neurons are busily firing away, often in synchronized bursts that send waves of electricity sweeping across the surface of your brain.

What all this neural activity accomplishes, if anything, is a mystery, and part of the even larger puzzle of why we sleep. One idea that has gained favor in recent years is that during certain stages of sleep, the brain replays experiences from the day to strengthen the memory of what happened. Support for this notion comes from a variety of experiments with rodents and people, including a new study in this issue suggesting that boosting such memory-related activity in the sleeping brain can improve memory performance in humans.

Some researchers suspect that replaying the recent past during sleep is more than a memory aid. This review may also give the brain a chance to catch important information it missed the first time around. “There's more and more evidence accruing that what we're seeing during sleep is not just a strengthening of memories,” says Robert Stickgold, a neuroscientist at Harvard Medical School in Boston. “What the brain is really trying to do is extract meaning.”

Such ideas aren't universally accepted, however. One new and controversial hypothesis suggests that memory and other cognitive benefits are merely side effects of the true function of sleep: dialing down synapses that have gotten overexcited by daytime activity. And some skeptics aren't convinced that sleep has anything to do with memory at all. Given all the uncertainties, researchers say the quest to understand the sleeping brain is just beginning. But they're already finding fascinating clues about what happens when we're off in the Land of Nod.

Let me see that again

The first experimental evidence that the brain replays recent experiences during sleep came from experiments with rats begun in the 1990s by neuroscientist Bruce McNaughton of the University of Arizona, Tucson, and his graduate student Matthew Wilson. McNaughton and Wilson recorded the electrical activity of neurons called “place cells” in the rat hippocampus. These neurons have an affinity for particular locations, so that as a rat runs around its enclosure, a given place cell fires each time the rodent passes through that cell's favorite spot. Because individual place cells respond only to a specific location, each time a rat takes a different route, a different sequence of place cells fires. Subsequent studies found that sequences of place-cell firing that occur as a rat explores a new environment are replayed the next time the rat dozes, as if the animal retraces its steps during sleep.

Humans may do something similar. In 2004, a research team led by Pierre Maquet of the University of Liège, Belgium, used positron emission tomography (PET) to monitor brain activity in men playing a virtual-reality game in which they learned to navigate through a virtual town (actually a scene from the shoot-'em-up video game Duke Nukem). The same regions of the hippocampus that revved up when the subjects explored the virtual environment also became active when the men slipped into slow-wave sleep that night. This sleep stage is often considered the deepest: Slow-wave sleepers are hard to rouse. During this sleep stage, electroencephalogram (EEG) traces show waves of electrical activity throughout the brain that peak about once a second. PET scans revealed that the more intense hippocampal activity a volunteer had during slow-wave sleep, the better he performed the next day when he sprinted through the virtual town to find certain objects as quickly as possible, Maquet and colleagues reported in the 28 October 2004 issue of Neuron.

Maquet's findings showed a nice correlation between neural activity during sleep and subsequent memory performance, says Jan Born, a neuroscientist at the University of Lübeck in Germany. The natural next step, Born says, was to see whether artificially boosting memory-related neural activity during sleep could improve memory performance. On page 1426, Born's team describes an attempt to do just that.

The researchers had volunteers play a video version of the card game Memory (also known as Concentration), in which they had to learn and remember the locations of pairs of cards bearing the same image in a group of 30 cards. Each matched pair flashed on the screen for a few seconds, one at a time, with all the other cards facing down. After the volunteers had seen the locations of all the pairs, the researchers tested the subjects' recall by turning one of the 30 cards face up and asking them to find its match. The researchers then used EEG electrodes to monitor the volunteers' brain activity while they slept.

Once the volunteers entered slow-wave sleep, the researchers gave some of them a puff of rose-scented air. They'd previously given some of the subjects a whiff of rose during their initial training session with the cards, reasoning that the odor would reactivate memories of the training session in these subjects without waking them. Indeed, functional magnetic resonance imaging (fMRI) scans in sleeping subjects revealed that the odor activated the hippocampus in those who had experienced it previously, even though the EEG showed no disruptions in the subjects' slumber. Although they didn't remember smelling roses in their sleep, the subjects who got the fragrant prompt remembered the matched pairs better the next day, getting 97% correct compared to 86% for subjects who'd received no odor while sleeping. Subjects who got the rose odor either while awake or while in REM sleep, on the other hand, showed no memory boost; nor did presenting the odor during slow-wave sleep help subjects who hadn't been exposed to rose during the training session. “It's the first study to really demonstrate that one can influence memory with stimuli that explicitly activate the hippocampus during sleep,” says Wilson, now an associate professor at the Massachusetts Institute of Technology in Cambridge.

Born's findings fit with a popular view of how the brain files memories away for long-term storage, a process neuroscientists call memory consolidation. According to this hypothesis, memories are first encoded by the hippocampus and later—perhaps in a matter of hours or days—transferred for long-term storage to the cerebral cortex, or neocortex. Several lines of evidence support this scenario, among them the observation that many people who suffer amnesia following damage to the hippocampus can still recall events and facts learned prior to their injury even though they're unable to form new memories. In such patients, old memories must reside somewhere other than the hippocampus. How the brain might transfer memories from the hippocampus to the neocortex isn't known, but it's assumed to require some kind of back-and-forth communication between the two structures.

A study by Wilson and postdoctoral fellow Daoyun Ji published in the January issue of Nature Neuroscience supports this idea. Wilson and Ji found that, much like hippocampal place cells, neurons in the visual cortex of rats replay firing sequences during slow-wave sleep that match their activity during the rats' daytime maze running. Moreover, the scientists found that the replay in the visual cortex happens in lockstep with replay in the hippocampus. “It's the first time we see sequences both in the hippocampus and the neocortex and their coordination in time,” says Maquet.

More than memory

Beyond simply fortifying memories, the brain may be sifting through recent experiences during sleep to identify rules about cause and effect or other useful patterns, some researchers suspect. One of the first hints of this phenomenon came from a 2004 Nature paper by Born's group. They reported that people playing a game that required manipulating strings of numbers were more than twice as likely to have a flash of insight that enabled them to solve the problem more quickly after a night of sleep than after a similar time of wakefulness.

More recently, at a sleep research meeting last month at the Salk Institute for Biological Studies, Stickgold presented findings suggesting that people find missing connections while they sleep. His group had volunteers play a card game in which they attempted to predict whether it would “rain” based on cards the researchers had shown them. The game was rigged so that the card with the diamonds, for instance, was followed by sunny weather 80% of the time. Each card had its own rule, but the volunteers did not know the rules even existed. As expected, they did no better than chance at first. But after 200 predictions, their success rates improved. When the subjects came back 12 hours later to try again, they had improved even more—but those who'd slept improved about 10% more than those who hadn't. Although it's a modest improvement, “there's a growing sense that there's active learning during sleep,” says Wilson.

There's also increasing evidence that different stages of sleep are involved in consolidating different kinds of memory, says Matthew Walker, a neuroscientist at Harvard Medical School. Spatial memories, like those formed by playing Memory or by navigating through a maze or virtual town, seem to be consolidated during slow-wave sleep. The same appears to be true for declarative memories, which involve remembering facts—but not necessarily other kinds of memory. Some studies have found that the brain processes memories with a strong emotional component during rapid-eye-movement (REM) sleep and processes memory for motor skills, such as tapping out a difficult sequence on a keyboard, during stage 2 and REM sleep (see diagram). Why this division of labor exists is a puzzle, but Walker and others speculate that the different physiological and neurochemical milieus associated with different sleep stages may favor certain kinds of neural plasticity.

Some researchers point out, however, that the literature on which sleep stages relate to which types of memory is peppered with contradictions. “There is inconsistency here, and someone has to be wrong,” says Jerome Siegel, a neuroscientist at the University of California, Los Angeles. Stickgold and other proponents of a sleep-memory link acknowledge that they face many unresolved issues about the role of sleep in memory consolidation. “There are massive questions remaining about how extensive it is, how important it is, exactly which stages of sleep affect which types of memory,” Stickgold says.

Another wide-open issue is whether there's a link between dreaming and memory-related neural activity during sleep. The kind of direct replay of recent experience suggested by Wilson's work, for example, doesn't seem to be the stuff of dreams. Stickgold's group has found that only 1% to 2% of episodes from dreams reflect events from the previous day. If dreams don't directly reflect the memory-consolidation process, what do they reflect? “We're in no man's land,” says Walker.

Another hypothesis

Not everyone is on board with the idea that brain activity during sleep is primarily about replaying recent experiences. Giulio Tononi, a neuroscientist at the University of Wisconsin, Madison, has recently advanced a very different hypothesis. He proposes that the purpose of sleep, at least as far as the brain is concerned, is to weaken neural connections throughout the brain.

In Tononi's view, the synaptic connections between neurons get progressively stronger during the day as a result of long-term potentiation (LTP), a physiological process by which neurons that fire at the same time strengthen their connections with each other. Most neuroscientists consider LTP a major mechanism of neural plasticity—and therefore of learning and memory (Science, 22 December 2006, p. 1854). Yet a day's worth of LTP can be too much of a good thing, Tononi contends. Stronger synapses increase the brain's energy needs—a serious concern for an energy-hogging organ that already accounts for 20% of a person's metabolism. Stronger synapses are also bigger, taking up precious space. And finally, too much LTP may saturate synapses, leaving them unable to get any stronger when the brain needs to learn something new.

Sleep restores homeostasis by ratcheting down synaptic strengths, Tononi argues. It's a far more important service than providing a modest boost in memory performance, he says. “Sleep is too high a price to pay for the 15% improvement we see in some things,” Tononi says. “I think it does something much more fundamental to the neuron: It's the price we pay for plasticity.” How might sleep reset synaptic strengths? One clue, Tononi says, comes from a study by Swiss researchers published in the 15 January Journal of Physiology. They reported that stimulating slices of rat brain to cause once-per-second bursts of neural firing induces a type of synaptic weakening called long-term depression (LTD). Tononi thinks it's no coincidence that the coordinated neural firing during slow-wave sleep happens at this same frequency. The slow waves, one per second, could induce LTD to dial down synapses that got too strong during the day.

Other evidence comes from human studies, including one from Tononi's group that found that slow waves measured by EEG in sleeping subjects were most intense in brain regions involved in learning an arm-movement task the previous day. That's consistent with the idea that the slow waves happen where they're needed most to restore synaptic homeostasis. Conversely, immobilizing a person's arm in a sling decreases slow-wave intensity in arm-related areas of neocortex the following night, Tononi and colleagues reported in the September 2006 issue of Nature Neuroscience.

Tononi also points to a paper Walker's group published in Nature Neuroscience in January. Those researchers found that undergraduate volunteers deprived of a night's sleep were less able than well-rested peers to learn new word pairs the next day. (fMRI scans suggested that the deficit was specifically related to memory—brain regions that mediate attention and alertness functioned normally in the sleep-deprived undergrads.) With no slow-wave sleep to dampen their synapses, the subjects were unable to learn as well the next day, Tononi says.

Tononi's view of the role of sleep is “an interesting and intuitive idea,” Walker says. He also thinks it's not necessarily incompatible with the notion that sleep enhances memory by strengthening the underlying synapses, as proponents of the memory-replay scenario have generally assumed. “I think they could act not just independently but synergistically.”

But not everyone is ready to embrace Tononi's proposal. Says Stickgold: “It's an elegant hypothesis that doesn't have a lot of data behind it.”

Bah, humbug

Researchers advancing a link between sleep and memory have other hurdles to overcome. One of the most disconcerting inconsistencies in the sleep-memory literature, Siegel explains, is that studies of total sleep deprivation have failed to find a deleterious effect on declarative memory. Like Tononi, Siegel also questions whether relatively modest memory benefits offer enough of an adaptive advantage to compensate for being unresponsive for hours a day. “I'm just not convinced that there is any connection at all” between sleep and memory, Siegel says. He favors the idea that sleep evolved to help animals conserve energy for their entire bodies and to prevent them from being active at times when they're less likely to find food and more likely to be eaten.

To be sure, memory is not the only function of sleep, counters Stickgold, but the evidence that it is one function of sleep, at least in mammals, is too great to ignore. Once sleep evolved, he says, evolution figured out how to make that downtime as productive as possible. Stickgold also argues that the modest improvements typically seen in sleep-memory studies are nothing to yawn at. He is fond of pointing out that a 15% gap in performance is the difference between winning the Boston Marathon and coming in 3000th. In competitive circumstances, small advantages can make a big difference. But whatever you do, try not to lose any sleep over it.