News this Week

Science  01 Jul 2005:
Vol. 309, Issue 5731, pp. 28

    ITER Finds a Home--With a Whopping Mortgage

    Daniel Clery and Dennis Normile*

    * With reporting by Gong Yidong of China Features in Beijing and Andrey Allakhverdov in Moscow.

    After a year and a half of tense diplomacy and secret discussions, an international fusion research collaboration has finally chosen a site for the world's most expensive science experiment. Meeting in Moscow this week, ministers from China, the European Union (E.U.), Japan, Russia, South Korea, and the United States announced that Cadarache, in southern France, has been chosen as the location of the International Thermonuclear Experimental Reactor (ITER).

    “I'm extremely pleased,” says Jean Jacquinot, former head of the Cadarache fusion lab and now science adviser to France's high commissioner for atomic energy, “not because it is Cadarache, but because the whole community can now get together and build something.”

    After standing firm against foreign opposition, Japan may in the end have bowed to internal pressure to give up its bid to be ITER's host. Observers speculate that the Ministry of Finance, seeking to rein in Japan's deficit spending, may have balked at the price tag, about $2.5 billion for the host country. In return for the withdrawal of the Japanese site, companies in Japan will get substantial E.U. procurement contracts, and European money will help build a major research center in Japan. The choice of Cadarache “is disappointing,” says plasma physicist Kenro Miyamoto, a professor emeritus at the University of Tokyo, “but it's preferable to having the project fall apart.”

    ITER aims to recreate the sun's power on Earth. Using intense magnetic fields to hold hydrogen isotopes at enormous temperature and pressure, it would produce a flood of energy as the isotopes fuse to form larger nuclei. Originally proposed at a U.S.-Soviet summit in 1985, the ITER design was essentially complete in 2001, but when the six partners gathered in Washington, D.C., in December 2003 to pick between two candidate sites, South Korea and the United States supported Rokkasho in northern Japan, whereas Russia and China backed the E.U.'s candidate at Cadarache (Science, 2 January 2004, p. 22).

    Joining forces.

    The E.U.'s Janez Potocnik (left) helps Japan's Nariaki Nakayama sign on the dotted line in Moscow.


    Further technical studies failed to resolve the impasse. Some Europeans accused U.S. officials of favoring Japan because, unlike France, it had supported the U.S.-led invasion of Iraq. The logjam began to move in April this year when E.U. research commissioner Janez Potocnik visited Tokyo; negotiations continued during a visit by Japanese Prime Minister Junichiro Koizumi to Luxembourg in May. The two rivals for host agreed on a deal guaranteeing certain concessions to the loser (Science, 13 May, p. 934). All that remained was for one side to back down. This week, Japan graciously removed Rokkasho from the running.

    As expected, the E.U. will pay for 50% of ITER's $5 billion construction price tag. The other five partners will contribute 10% each as payments in kind. As a consolation to Japan, the E.U. will place some of its industrial contracts with Japanese companies so that Japan will end up building 20% of the reactor. Japanese researchers will make up 20% of the staff of the ITER organization, and the E.U. agreed to support a Japanese candidate for director general. Some headquarters functions will also be sited in Japan, and the E.U. promised to back Japan as a host for any subsequent commercial prototype reactor.

    Japan will also get to host an extra research center to speed work toward commercial fusion reactors. Japan can choose from a list, drawn up by the six partners, that features a high-energy neutron source for materials testing, a fusion technology center, a computer simulation lab, and an upgrade of Japan's existing JT-60 fusion reactor. To pay for the center, the E.U. and Japan will contribute up to $800 million more than the normal ITER budget. “Japan will serve as what you could call a quasi-host country for the ITER project,” Japan's science minister, Nariaki Nakayama, told a press conference today. “Through the [extra facility], we will become a base for international research and development in fusion energy equal in importance to the E.U.”

    Other partners, particularly South Korea and China, are less enamored with the deal. Luo Delong, an official with China's Ministry of Science and Technology, says that “more discussion is needed on the issues of the ITER director and the additional research facility.”

    European fusion researchers are delighted with the result. “Everyone is very happy,” says Alex Bradshaw, scientific director of the Max Planck Institute for Plasma Physics in Garching/Greifswald, Germany, and chair of Germany's fusion research program. But some researchers are wondering whether, considering the final deal, it wouldn't have been better to be the loser—especially because France seems to be getting the whole pie, with slim pickings for other E.U. countries. There are also worries that little will be left for fusion research supporting ITER if the European research budget shrinks (Science, 24 June, p. 1848). “It is essential to keep other activities going, or no one from Europe will be around to use ITER” in 10 years' time, says Bradshaw.

    For now, however, there's a palpable sense of relief after 18 months of wrangling. “I will certainly be quite happy to share a glass with my European colleagues,” says France's Jacquinot.


    Solar-Sail Enthusiasts Say Mission Lost, Possibly in Space

    Daniel Clery

    Cosmos 1, a privately funded spacecraft that aimed to demonstrate solar sailing for the first time, appears never to have had a chance to unfurl its sails. But staff from the Pasadena, California-based Planetary Society, the nonprofit organization running the project, say tantalizing messages ground controllers received shortly after the craft's launch on 21 June hint that it might have made it into orbit. “We're hanging in there,” says project director Louis Friedman. “But it's an increasingly dim hope.”

    Officials from the Russian Space Agency (RKA), which launched the spacecraft on board a converted ICBM from a submarine in the Barents Sea, believe the rocket's first stage failed, causing launcher and payload to crash into the sea. The plan was for the Volna rocket to lift Cosmos 1 into an 825-kilometer-high orbit. There researchers would have inflated booms to spread eight solar sails made of ultralight reflective Mylar, designed to show that the pressure of sunlight could slowly push Cosmos 1 into a higher orbit. The main space agencies hope to use solar sails to reach parts of the solar system inaccessible to chemical rockets (Science, 17 June, p. 1737). An earlier demonstration by the Planetary Society, also called Cosmos 1, failed on launch in 2001.

    Although RKA's launch telemetry suggested a booster failure, some tracking stations along the planned orbit picked up signals that seemed to come from Cosmos 1. Researchers from Russia's Space Research Institute in Moscow continue to listen for the craft and are sending commands to turn on its transmitter. Even if Cosmos 1 did reach space, Friedman says, “it would be in a very low orbit and probably decayed quickly.” Still, Friedman says, “it would be nice to know the spacecraft worked.”

    Friedman says the Planetary Society is talking to the mission's main sponsor, the entertainment company Cosmos Studios, and others about mounting another attempt. “We can still advance this whole thing,” he says. But after two failed attempts, “we'll never use a Volna again.”

    U.S. BUDGET

    House 'Peer Review' Kills Two NIH Grants

    Jocelyn Kaiser

    For the second year in a row, the House of Representatives has voted to cancel two federally funded psychology grants. A last-minute amendment to a spending bill would bar the National Institutes of Health (NIH) from giving any money in 2006 to the projects, one a study of marriage and the other research on visual perception in pigeons. The grants total $644,000 a year and are scheduled to run until 2008 and 2009.

    The amendment was offered by Representative Randy Neugebauer (R-TX), who last year won a similar victory involving two other grants, although his efforts were later rejected in a conference with the Senate. Researchers are hoping the Senate will come to the rescue again this year.

    Neugebauer says that he is correcting skewed priorities at the National Institute of Mental Health (NIMH), in particular, the institute's “fail[ure] to give a high priority to research on serious mental illnesses.” But NIH officials and scientific societies say he's meddling in a grantsmaking process that is the envy of the world. In a statement before the vote, NIH Director Elias Zerhouni called the amendment “unjustified scientific censorship which undermines the historical strength of American science.”

    Some House Republicans have been scrutinizing NIH's portfolio for the last few years and in 2003 almost killed several grants studying sexual behavior. Neugebauer's concerns echo the arguments of longtime NIMH critic E. Fuller Torrey, a psychiatrist who contends that the agency should spend more on diseases such as depression and schizophrenia. Last year's vote was aimed at two NIMH psychology grants that had already ended, so the effect would have been symbolic (Science, 17 September 2004, p. 1688).

    For the birds?

    House lawmakers nixed a grant on perception research involving pigeons, long used in studies such as this B. F. Skinner experiment on operant conditioning.


    This year, the vote could have a real impact, and it came as a rude shock to the two principal investigators involved. “I'm disappointed that peer review is being undermined,” says Sandra Murray of the University at Buffalo in New York, who received $345,161 from NIMH in 2005 and is expecting an equivalent amount each year through early 2009. Murray has so far enrolled 120 newlywed couples—the target is 225—in a study of factors that contribute to stable marriage and to divorce, which, she notes, “has a huge societal cost.” Her study will also look at mental illnesses, she says. Neugebauer says funds for “research on happiness” would be better spent on new treatments for depression.

    The second grant, to Edward Wasserman of the University of Iowa in Iowa City, continues his 14-year investigation of visual perception and cognition in pigeons. The study, slated to receive $298,688 a year through mid-2008, sheds light on “how the human brain works” and could help develop therapies for mental and developmental disorders, Wasserman says. Neugebauer, however, questions whether it “would have any value for understanding mental illnesses.”

    The American Psychological Association and the Association of American Medical Colleges were part of a coalition that tried last week to quash the amendment, sending a flurry of letters to lawmakers. Several Democrats also opposed the cancellation, with Iowa Representative James Leach warning his colleagues belatedly that setting “a precedent of political 'seers' overriding scientific peers … is a slippery slope.” The Neugebauer language passed as part of a set of amendments that were not debated on the floor, and no vote count was recorded.

    Observers expect this year's effort by Neugebauer to be deleted (as was the case last year) when the House and Senate meet to reconcile differences in the two bills. Still, says NIMH Director Thomas Insel, “this is really unfortunate. It adds a congressional veto to the process of peer review.” Adds lobbyist Patrick White of the Association of American Universities, “Our community has got to wake up on this. … We have a serious problem, and it's not going away.”

    2006 FUNDING

    Senate Squeezes NSF's Budget

    Jeffrey Mervis and Charles Seife

    It's crunch time for the National Science Foundation (NSF). Last week, a Senate spending panel voted less money for the agency than even the president's stingy request. It delivered bleak news to backers of a proposed high-energy physics experiment at Brookhaven National Laboratory in Upton, New York. And, in a last-minute reversal, the panel restricted the agency's ability to strike the best deal on the icebreaking services needed to ferry scientists into the polar regions.

    These developments are part of NSF's budget for the 2006 fiscal year that begins on 1 October. In February, the White House had requested a 2.5% budget boost, to $5.6 billion, and on 16 June the House of Representatives approved an increase of 3.1%. But the Senate panel voted a mere 1% bump. The two bills must be reconciled later this summer. “We live in hope that we'll end up better than we are now. But we know it's a tough year,” says NSF Director Arden Bement.

    The Senate panel did single out a few programs for special attention, including adding $6 million to the $94 million plant genome program and a similar amount for the Experimental Program to Stimulate Competitive Research to bolster 25 research-poor states. It also pumped up the $47 million operating budget of the National Radio Astronomy Observatory by $4 million.

    Tough sailing.

    How to find and pay for icebreaking services is one of many problems facing NSF in 2006.


    The Senate took a harder line than did the House on NSF's $841 million education directorate, which the president had proposed cutting by $104 million. The House added back $70 million, while the Senate panel restored only $10 million. Of that, $4 million would go to a 4-year-old program that links universities and local school districts to improve student achievement and that the president and the House want to shift to the Department of Education. The money is seen as a marker of the Senate's intent to keep the program at NSF.

    The Senate panel took a whack at the Rare Symmetry Violating Processes (RSVP) project, a high-energy physics experiment at Brookhaven National Laboratory that would look for effects beyond the Standard Model. Citing cost estimates far beyond an initial $158 million projection, the panel withheld not only the $42 million requested in 2006 for construction but also another $14 million given to RSVP planners but not yet spent. The appropriators also told NSF that any revised version of the project would have to go back to square one in a lengthy approval process.

    Finally, the senators sided with the U.S. Coast Guard in ongoing negotiations over who should crunch the pack ice blocking entry to NSF's logistics headquarters in Antarctica, saying NSF “shall procure icebreaking services from the Coast Guard.” That goes against a House preference for NSF to have “the most cost-effective means of obtaining icebreaking services.” The panel also rewrote an earlier version of its accompanying report, which had ordered the Coast Guard to pay for necessary repairs to its two polar-class icebreakers, replacing that language with a call for a “joint” resolution of the issue.


    Senate Resolution Backs Mandatory Emission Limits

    Eli Kintisch

    Seven years after rejecting the Kyoto climate treaty by a vote of 95-0, the U.S. Senate has affirmed the science of global warming and for the first time called for “mandatory market-based limits” on greenhouse gas emissions. The bipartisan resolution is not binding. But it repudiates the long-standing White House position that research and voluntary action are preferable to limits, and the resolution will be part of a massive energy bill approved this week by the Senate.

    “The sense of the Senate is changing,” says an aide to Senator Lamar Alexander (R-TN), one of 12 Republicans to support the resolution introduced by Senator Jeff Bingaman (D-NM). The statement was co-sponsored by Senator Pete Domenici (R-NM), chair of the Energy and Natural Resources Committee, which plans hearings this month on a regulatory system for greenhouse gases.

    The statement declares that “there is growing scientific consensus that human activity is a substantial cause of greenhouse gas accumulation.” The consequences, it says, include rising sea levels, temperatures increasing at a rate “outside the range of natural variability,” and more frequent and severe floods and droughts.

    Before introducing his resolution, Bingaman had withdrawn a plan that would have made emission credits much cheaper by using 2012 emission levels as a target for 2020 and allowing the government to sell credits at a fixed price. Domenici had shown interest in the plan, but he later decided that there wasn't enough time to work out the rules. However, once Domenici stepped forward, “industry became very interested,” says Paul Bledsoe, a spokesperson for the National Commission on Energy Policy, a group of scientists, policymakers, and business leaders whose recommendations last year formed the basis for the Bingaman proposal.

    Chipping away.

    A federal energy bill now headed to conference would encourage low-emissions technologies like biomass reactors through financial incentives and tax breaks.


    Passage of the nonbinding resolution followed the defeat of an emissions cap-and-trade system proposed by senators John McCain (R-AZ) and Joe Lieberman (D-CT). The plan, backed by many environmental groups, would use 2000 greenhouse gas emissions levels as a target for 2010 and set up a scheme of emissions credits; the credits then would be traded among emitters with no cost limits. This effort failed, by a vote of 60-38, for the second time in 2 years. During the debate, McCain criticized Domenici's reservations about picking industrial “winners and losers.” Said McCain: “I will tell you another loser, and that is the truth.” But Domenici deflected the attack: “To recognize there is a problem does not mean that [McCain's] way of solving it is the only solution.”

    Senator James Inhofe (R-OK) helped lead opposition to the Bingaman resolution, saying that several of its scientific assertions were “not true.” Bingaman aides said that Vice President Dick Cheney called for specific textual changes, including changing the word “mandatory” to “additional.” Cheney's office declined comment, although the White House has said that it opposes compulsory schemes. Inhofe's motion to block the resolution lost by a vote of 54-43.

    Other aspects of the more than $36 billion energy bill passed by the Senate could cut carbon emissions if enacted. A successful amendment penned by Senator Chuck Hagel (R-NE) would authorize loans and financial incentives for companies to research carbon-cutting technologies, although those measures must be approved separately by a spending panel before any money would be available. An amendment by Senator Frank Lautenberg (D-NJ) to combat the “alteration of federal climate-change reports” was ruled out of order. It was a response to recent news that one-time White House staffer Philip Cooney, a former petroleum industry lobbyist with no science training, had edited climate science documents.


    Joining Forces for Brain Tumor Research

    Jennifer Couzin

    Frustrated by the sluggish pace of brain tumor research and the often dismal prognosis for those afflicted, eight brain tumor nonprofits* in the United States and Canada are pooling up to $6 million total to finance risky, innovative research projects, potentially including mathematical modeling and studies of neural development and stem cells. The effort announced this week, called the Brain Tumor Funding Collaborative, is unusual in the disease advocacy world, where organizations in the same disease area are typically rivals competing aggressively for donations.

    Here, however, several foundations tentatively began discussing 2 years ago how to fuel brain tumor research. Roughly 41,000 people are diagnosed with brain tumors in the United States each year, and just under half of those tumors are malignant.

    “We really want to break out of the traditional mold,” says Susan Weiner, whose child died of a brain tumor. A cognitive psychologist and vice president for grants at the Children's Brain Tumor Foundation, Weiner notes that each of the eight groups had “to understand that you can't do it by yourself.” Each has pledged a certain amount (they decline to say how much), which will enable the collaborative to offer much larger individual grants—up to $600,000 per year—than each typically funds. They will begin accepting initial proposals in August and hope to announce the first awards in January.

    Brain cancer research is notoriously difficult, in part because the blood-brain barrier prevents easy access and because there's no good rodent model, says Susan Fitzpatrick, a neuroscientist and vice president of the McDonnell Foundation, another participant. But advances in genomics have begun to clarify brain cancer biology, leaving the collaborative hopeful that its effort, exceedingly challenging to pull together, says Fitzpatrick, will pay off.

    * The American Brain Tumor Association, the Brain Tumor Foundation of Canada, the Brain Tumor Society, the Children's Brain Tumor Foundation, the Goldhirsh Foundation, the James S. McDonnell Foundation, the National Brain Tumor Foundation, and the Sontag Foundation.


    Gates Foundation Picks Winners in Grand Challenges in Global Health

    Jon Cohen

    * The funded projects are listed and described at

    In January 2003, Microsoft billionaire Bill Gates challenged scientists to think big. He asked them to identify critical problems that stand in the way of improving the health of people in developing countries, and he announced that the Bill and Melinda Gates Foundation would bankroll novel research projects aimed at solving them. Last week, after reviewing 1517 letters of intent and then inviting 445 investigators from 75 countries to submit full proposals, the foundation announced the winners: 43 projects that will receive a total of $437 million. “We all recognize that science and technology alone will not solve the health problems of the poor in the developing world,” says Richard Klausner, who runs the foundation's global health program. “What science and technology can and must do, however, is create the possibility of new vaccines, new approaches, and new cures for diseases and health conditions that for too long have been ignored.”

    The 5-year grants range from $579,000 to $20 million and address 14 “Grand Challenges in Global Health” that mainly focus on R&D for drugs and vaccines, controlling mosquitoes, genetically engineering improved crops, and developing new tools to gauge the health of individuals and entire populations. Grant recipients come from 33 countries—although more than half live in the United States—and include Nobel laureates and other prominent academics as well as investigators from biotechnology companies and government research institutions.* “These projects truly are on the cutting edge of science, and many of them are taking very important risks that others have shied away from,” says Elias Zerhouni, director of the U.S. National Institutes of Health in Bethesda, Maryland, who serves on the Grand Challenges board that evaluated the ideas.

    Klausner, who formerly ran the National Cancer Institute (NCI), said the idea for the Grand Challenges grew out of a meeting he had with Gates in the fall of 2002. Says Klausner: “He asked me an interesting question: 'When you were running NCI, did you have a war room with the 10 most critical questions, and were you monitoring the progress?'” They also discussed German mathematician David Hilbert, who in 1900 famously spelled out 23 problems that he predicted “the leading mathematical spirits of coming generations” would strive to solve.


    Richard Klausner (left) and Bill Gates confer at the 2003 World Economic Forum, where the initiative was launched.


    Gates announced the Grand Challenges initiative at the World Economic Forum in Davos, Switzerland, in January 2003, committing $200 million from his foundation. More than 1000 scientists suggested ideas that led the initiative's board to select 14 grand challenges (Science, 17 October 2003, p. 398). After sifting through the letters of intent and, subsequently, the full proposals, Gates decided to up the ante: The foundation contributed another $250 million; $27 million more came in from Britain's Wellcome Trust and $4.5 million from the Canadian Institutes of Health Research.

    Researchers applying for grants had to spell out specific milestones, and they will not receive full funding unless they meet them. “We had lots of pushback from the scientific community, saying you can't have milestones,” says Klausner. “We kept saying try it, try it, try it.” Applicants also had to develop a “global access plan” that explained how poor countries could afford whatever they developed.

    Nobel laureate David Baltimore, who won a $13.9 million award to engineer adult stem cells that produce HIV antibodies not found naturally, was one of the scientists who pushed back. “At first, I thought it was overly bureaucratic and unnecessary,” said Baltimore, president of the California Institute of Technology in Pasadena. “But as a discipline, to make sure we knew what we were talking about, it turned out to be interesting. In no other grant do you so precisely lay out what you expect to happen.”

    Other grants went to researchers who hope to create vaccines that don't require refrigeration, modify mosquitoes so they die young, and improve bananas, rice, and cassavas. In addition to HIV/AIDS, targeted diseases include malaria, dengue, tuberculosis, pertussis, and hepatitis C. Many of the projects involve far-from-sexy science. “We had this idea we were supposed to be hit by bolts of lightning,” says Klausner. “But this is about solving problems. These things aren't often gee-whiz, they're one area applied to a new area.”

    Klausner says this is not a one-shot deal. “We're not being coy with people,” he says. “If they hit all their milestones and it looks spectacular, we would expect them to come back and ask for future funding.”


    Flying on the Edge: Bluebirds Make Use of Habitat Corridors

    Erik Stokstad

    In many parts of the world, landscapes are turning into isolated fragments of habitat. Conservation biologists and land managers often try to link these patches via connecting strips of habitat that, in theory, give animals better access to food and mates. But testing whether, and how, these so-called corridors work has been difficult.

    On page 146, a team led by ornithologist Douglas Levey of the University of Florida, Gainesville, and ecologist Nick Haddad of North Carolina State University in Raleigh describes the largest replicated, controlled study of corridor efficacy and reports that bluebirds prefer to travel along the edges of these habitat connectors. The study also shows that small-scale observations of behavior can be used to predict how animals move through larger landscapes. Such results have conservation biologists excited. “This provides a lot more confidence that corridors are working as hypothesized,” says ecologist Reed Noss of the University of Central Florida in Orlando.

    The study team created eight experimental sites in the pine forests of western South Carolina to test how corridors are used. Within each, five patches of forest were cut down to make the open habitat that eastern bluebirds (Sialia sialis) prefer. The central “source” patch, 100 meters by 100 meters, was connected to another “receiver” patch by a 150-meter-long corridor. Each site also had three patches isolated from the source, at least one of which had “wings”—dead-end corridors on either side—in order to test the idea that even unlinked corridors help organisms find patches of natural habitat. “It's a very clever experiment,” comments Stuart Pimm of Duke University in Durham, North Carolina.

    The middles of the source patches were planted with wax myrtle bushes, whose fruits are a major food resource for the bluebirds. For two field seasons, Levey's postdoc Joshua Tewksbury, who is now at the University of Washington, Seattle, and others tracked single birds in the source patch as they flew from the wax myrtle bushes to other perches within patches or the surrounding forest. For each hop, until the birds flew out of sight, they noted the direction and distance traveled—usually no more than 20 meters—and the resting time at each perch. The birds' movements weren't totally random; when they encountered an edge of a patch, for example, they most often flew parallel to it.


    Bluebirds (left) used corridors to travel between patches of habitat (white rectangles) experimentally created in a pine forest (red).


    The researchers then developed a computer model in which short bird flights mimicking the observational data were stitched together to simulate a 45-minute journey—the estimated time it takes a bird to digest fruit and excrete seeds—that took a simulated bird sometimes more than 250 meters from its starting point. After tens of thousands of runs, the model predicted that birds in a source patch were 31% more likely to end up in the connected patch than in unconnected ones.
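    The stitched-hop model described above is, in essence, a random walk assembled from field-measured hop lengths and directions. The following Python sketch illustrates the idea only: the `PATCHES` geometry, hop counts, and hop-distance distribution here are assumptions for illustration, not the authors' published model, which also encoded the observed behavioral rules such as edge-following.

```python
import math
import random

# Illustrative sketch of a stitched-hop movement model, loosely inspired
# by the study described above. All geometry and parameters below are
# hypothetical; they are not taken from the published model.

PATCHES = {                      # (x, y) centers of hypothetical patches
    "connected": (250.0, 0.0),   # the patch linked by a corridor
    "isolated_1": (0.0, 250.0),
    "isolated_2": (0.0, -250.0),
}
PATCH_HALF_WIDTH = 50.0          # 100 m x 100 m patches, as in the study

def simulate_journey(n_hops=100, mean_hop=15.0, rng=random):
    """Stitch short random hops into one journey; return the final (x, y)."""
    x = y = 0.0                  # start at the source-patch center
    for _ in range(n_hops):
        angle = rng.uniform(0.0, 2.0 * math.pi)
        dist = rng.expovariate(1.0 / mean_hop)   # hops mostly under ~20 m
        x += dist * math.cos(angle)
        y += dist * math.sin(angle)
    return x, y

def landing_patch(x, y):
    """Return the name of the patch containing (x, y), or None."""
    for name, (cx, cy) in PATCHES.items():
        if abs(x - cx) <= PATCH_HALF_WIDTH and abs(y - cy) <= PATCH_HALF_WIDTH:
            return name
    return None

def run(n_journeys=10000):
    """Simulate many journeys and tally where each one ends."""
    counts = {name: 0 for name in PATCHES}
    counts[None] = 0             # journeys ending outside every patch
    for _ in range(n_journeys):
        counts[landing_patch(*simulate_journey())] += 1
    return counts
```

    Note that with purely random hop directions, as sketched here, journeys end in connected and isolated patches at similar rates; the corridor preference the real model predicted emerges only when the empirically observed rules, such as flying parallel to patch edges, are layered on top of the basic walk.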

    To test the model, the researchers sprayed a fluorescent solution onto wax myrtle fruit in the source patches. Each week, they checked pole-mounted flowerpots in the four surrounding patches for any bird defecations with fluorescent seeds. Although they couldn't identify what kinds of birds had deposited the seeds, bluebirds were the most common species to perch over the pots.

    After analyzing 11,000 defecations, they found that seeds were 37% more likely to occur in the connected receiver patch than in the isolated ones, backing up the model prediction. Also mirroring the model, there was no significant difference in seed number between the isolated patches that had the dead-end wings and those that did not, suggesting that the birds weren't using that type of corridor to find habitat patches.

    Experts caution that it's difficult to generalize these results about corridor use to other species. But the basic point that small-scale observations can reliably inform landscape design is good news for those who can't afford to run large experiments. “It is comforting to conservation planners that one of the first attempts to scale up has proven quite successful,” says Paul Beier of Northern Arizona University in Flagstaff.

    The observations also provided insight into how bluebirds use corridors. Instead of flying down the middle, the bluebirds tended to stay along their edges in the pine plantations. The trees there may offer higher perches than the shrubby opening or better protection from hawks. One implication, for bluebirds at least, is that the width of a corridor or the quality of its habitat may not matter as much as that it has edges. Levey suspects that this edge effect holds true for other animals. But Beier points out that the experimental habitat differs from most corridors, which are usually strips of forest running through urban or agricultural land.


    Hard-Liner's Triumph Puts Research Plans in Doubt

    Richard Stone

    TEHRAN—Shapour Etemad was stunned by the victory of Tehran's hard-line mayor Mahmoud Ahmadinejad in last week's presidential runoff election. Like many intellectuals, Etemad, director of the National Research Institute for Science Policy in Tehran, had campaigned for a moderate government, adding his name to a public endorsement of former president Hashemi Rafsanjani. After Ahmadinejad's surprise landslide victory, Etemad was left wondering if he should resign his influential post and retreat to academia.

    Many Iranians were troubled by the stark choices in this election. Ahmadinejad campaigned on a promise to breathe new life into the Islamic revolution, whereas Rafsanjani pledged to seek closer ties with the United States. Although Ahmadinejad has not aired his views on science, some researchers fear that his ascendancy could result in a curtailment of foreign collaborations, an accelerated brain drain, and a shift toward more applied projects.

    That's not what Iranian scientists want to hear, given the distance they've come since 1979 when the Islamic revolution closed universities for 4 years. “We were completely isolated,” says string theorist Hessamaddin Arfaei, deputy director of research at the Institute for Studies in Theoretical Physics and Mathematics in Tehran. Stagnation deepened during the protracted Iran-Iraq war in the 1980s; afterward, U.S. economic sanctions slowed the recovery.

    Unknown quantity.

    A proponent of the Islamic revolution, President Ahmadinejad has not made known his views on science.


    It's only recently that Iranian science has enjoyed a widespread renaissance. The number of foreign collaborations has risen threefold in the past 4 years, says Iran's deputy minister of science, research, and technology, Reza Mansouri. “Scientific output has skyrocketed since 1993,” boasts Mohammad Javad Rasaee, dean of medical sciences at Tarbiat Modares University. Iran's share of global scientific output rose from 0.0003% in 1970 to 0.29% in 2003, with much of the growth occurring since the early 1990s, according to a study earlier this year in the journal Scientometrics. The analysis, led by immunologist Mostafa Moin of Children's Medical Center in Tehran, was based on publications tracked by the Institute for Scientific Information in Philadelphia, Pennsylvania. (Moin, a former science minister, ran as the sole reform candidate for president, placing fifth in the election's first round.)

    But momentum is in danger of being lost, some observers warn. After Moin was eliminated from the race, Etemad and a few dozen colleagues wrote an editorial in East newspaper on 20 June, urging “all cultivated people” to vote for Rafsanjani and arguing that “a total catastrophe is pending and immediate.”

    Others caution against rushing to judgment. Mansouri anticipates only “minor fluctuations” for Iranian scientists. The situation will become clearer, he says, when the new government, including a science minister, is appointed in early August. And some found hope in last week's offer by the board of the American Institute of Aeronautics and Astronautics to suspend a controversial ban on publications from Iran and three other countries (Science, 17 June, p. 1722); AIAA stated it will “formally reconsider” the policy on 1 September.

    Ahmadinejad's predilections may become apparent when a high council for science and technology, chaired by the president, meets this fall. The council, created earlier this year, controls most of Iran's science budget. Others argue that the country's scientific community has weathered previous changes of government well. “My thinking is that we will be affected very little, if at all,” says Yousef Sobouti, director of the Institute for Advanced Studies in Basic Sciences in Zanjan. But even if some fears have been exaggerated, Etemad predicts, “we're in for a long, hard time.”


    EPA Ponders Voluntary Nanotechnology Regulations

    1. Robert F. Service*
    1. With reporting by Amitabh Avasthi.

    Last week, the U.S. Environmental Protection Agency (EPA) held its first public meeting to gauge sentiments about a proposed voluntary pilot program to collect information on new nanomaterials that companies are making. The agency got an earful.

    More than 200 people gathered here at the Washington Plaza hotel to weigh in on the program, a possible precursor to guidelines that would mark the agency's first attempt to regulate nanotechnology. In a document released before the meeting, a coalition of 18 environmental and health-advocacy groups charged that a voluntary program would be inadequate to protect people from new chemical hazards. But most makers of nanomaterials applauded EPA's initial move as appropriate, because so little is known about the possible hazards of nanosized particles.

    “The meeting was like the blind man feeling the elephant,” says David Rejeski. He heads a new 2-year project at the Woodrow Wilson International Center for Scholars in Washington, D.C., on managing health and environmental impacts of nanotechnology. EPA and other agencies are still sorting out the scale of the challenge they face, Rejeski says.


    The sheer diversity of nanoparticles makes it hard to tell which ones pose risks.


    Nanomaterials put regulators in an unfamiliar bind. With traditional chemical toxins, any two molecules with the same chemical formula look and behave alike. Two nanoparticles made of the same elements but of different sizes, however, may have drastically different chemical properties. Even particles of the same size and elemental composition can have very different properties, due to differences in their chemical architecture—for example, diamond nanocrystals and buckyballs shaped like soccer balls, both made of pure carbon. That diversity makes it a daunting task to sort out just which particles are hazardous to people and the environment and to control their production and release.

    As a first step, EPA is thinking about asking nanomaterials makers to submit information on just what they are producing, how much is made, and possible worker exposure. “That's a good first step,” says Sean Murdoch, executive director of the NanoBusiness Alliance in Chicago, Illinois. But Jennifer Sass of the Natural Resources Defense Council in Washington, D.C., argues that asking companies to participate voluntarily doesn't go far enough. “It's going to be tough getting these companies to be good corporate citizens without the threat of regulation hanging over their heads,” Sass notes.

    Nearly everyone agrees that far more information is needed. To get it, some groups are starting to call for increased funding for toxicity and health studies on nanoparticles. In a commentary in the 14 June Wall Street Journal, Fred Krupp, president of Environmental Defense, and Chad Holliday, chair and CEO of DuPont, argued that funding for environmental health and safety studies of nanotechnology should rise from its current level of 4% to 10% of the $1.2 billion budget of the National Nanotechnology Initiative.
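In dollar terms, the Krupp-Holliday proposal is straightforward to work out from the figures quoted:

```python
# Dollar amounts implied by raising environmental health and safety
# studies from 4% to 10% of the National Nanotechnology Initiative's
# $1.2 billion budget, as Krupp and Holliday proposed.
nni_budget = 1.2e9           # dollars

current = 0.04 * nni_budget   # roughly $48 million
proposed = 0.10 * nni_budget  # roughly $120 million

print(f"current:  ${current / 1e6:.0f} million")
print(f"proposed: ${proposed / 1e6:.0f} million")
```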

    Rejeski argues that before a set dollar figure is agreed upon, policymakers need to decide what information they need in order to draw up nano regulations. Then, he says, they can determine how much money is needed to fill those holes. Rejeski adds that his team is currently drawing up just such an analysis and plans to release it later this summer.

    2006 BUDGET

    Can Congress Save NASA Science?

    1. Andrew Lawler

    In a remarkable show of bipartisan concern, U.S. lawmakers have ordered NASA not to sacrifice research programs to pay for President George W. Bush's vision of humans on the moon and eventually Mars. But at the same time, they may have compounded NASA's problems by giving a tentative green light to Bush's plans while providing little relief for an impending budget crunch in science.

    Last week, a Senate funding panel told NASA to spend an additional $400 million in its 2006 budget to fix the Hubble Space Telescope and bolster the flagging earth sciences effort. But the panel added only $134 million to NASA's $4 billion science budget to do so. Likewise, the House version of the spending bill, passed 2 weeks ago, is sympathetic to science but provides a relatively paltry $40 million increase over the president's request, most of which would go to saving the Glory earth science project. Reconciling the two pieces of legislation, one NASA manager says, “is sure to be difficult and confusing.” Compounding the problem are a spate of cost overruns in research projects and growing pressure to divert money to efforts like a new human space launcher to replace the space shuttle, which is due to return to flight later this month.

    NASA's new boss Michael Griffin has added another wrinkle: He's likely to rescue several science projects that the agency planned to cancel to save money. He recently ordered continued operation of the Tropical Rainfall Measurement Mission, which NASA sought to turn off last year in a decision that triggered a congressional outcry (Science, 13 August 2004, p. 927). NASA's efforts to win funding from the National Oceanic and Atmospheric Administration failed, so the space agency must shoulder the entire $16 million needed to keep it functioning through 2009, says NASA spokesperson Delores Beasley. Griffin is also under pressure not to turn off a host of other spacecraft, including Voyager 1 and 2, now under review for termination. Each has staunch supporters in Congress.

    Shuttle diplomacy.

    NASA must balance competing needs, such as returning the shuttle to flight, while planning a new mission to Jupiter's moon Europa.


    Griffin also recently promised senators a mission to Jupiter's moon Europa in the middle of the next decade, an effort sure to cost upward of $1 billion even with help from the European Space Agency. Congress likes the idea, and the House funding panel urged the agency to include Europa as a new start in 2007. But how that mission will fit into an increasingly strained long-term budget remains a mystery. This week, Griffin told Congress that it would be “rather dumb” to turn off Voyager 1 and 2, a cost-saving move in NASA's 2006 budget request.

    A team of agency officials and outside researchers, meanwhile, is working on ways to cope with a $1 billion cost overrun for the James Webb Space Telescope. That report is due later this summer. Cost increases in the Solar Dynamic Observatory and other missions that are already well into development are worrying agency managers.

    The fate of space station science also hangs in the balance. A sweeping internal NASA study laying out a revamped construction schedule for the international space station is due in July. NASA officials say that they must decrease the 28 flights now planned to meet the president's 2010 deadline for halting shuttle flights. That change, they add, is certain to reduce the number of missions devoted to orbiting research equipment and experiments.

    One likely victim, Griffin told Congress, is the centrifuge, once the central facility for station research. Life scientists will need to “go elsewhere,” he says. “I cannot put microbiology and fundamental life sciences higher than” the need for a new launch vehicle for astronauts.

    In contrast, preserving science aboard the station is one of the goals of a bill introduced last week to reauthorize NASA programs. “Such a restriction on the range of research disciplines aboard the [space station] is not in the best interest of the nation or of our partners,” says its sponsor, Senator Kay Bailey Hutchison (R-TX). The bill calls for NASA to spend an additional $100 million on station research in the next 5 years and come up with a revamped research plan.


    Flowing Crystals Flummox Physicists

    1. Adrian Cho

    Solidified helium appears to flow without any resistance. How that happens is anything but crystal clear

    UNIVERSITY PARK, PENNSYLVANIA—The gizmo could be mistaken for an artifact from a science museum or a custom-made part for your old VW Beetle. An aluminum cylinder 13 millimeters wide and 5 millimeters tall sits atop a slender post of beryllium copper. From its sides, two flaps protrude like large ears on a small boy's head, and fine wires festoon the top of the can. As Eunseong Kim, a postdoc here at Pennsylvania State University (Penn State), cradles it in his palm, the device hardly looks like the heart of a breakthrough physics experiment. Yet it produced perhaps the weirdest stuff ever made.

    Last year, while Kim was a graduate student, he and physicist Moses Chan used the can to squeeze ultracold helium into a crystalline solid that appears to flow without resistance—like a liquid with no viscosity. For decades physicists had mused about such a bizarre “supersolid,” and others had searched for and failed to find it. So Kim and Chan's results have touched off a flurry of activity among experimenters and a debate among theorists as to whether it's even possible for a perfect crystal to flow. They are rejuvenating helium physics, a small field that has played a large role in shaping modern physics (see sidebar, p. 39).

    Kim and Chan had previously seen signs of such “superfluid” flow in solid helium crammed into the pores of a spongelike glass. But on 3 January 2004 they saw the first clear evidence that it could occur in a pure crystal. “That was an exciting moment,” the soft-spoken Kim recalls as he sits at his desk in the subbasement of Osmond Hall. “That morning Moses came into my lab and I said to him, 'Maybe you'll get the Nobel Prize.'”

    Many others agree. “If verified, the discovery of supersolid helium will be one of the most important results in solid state physics—period,” says Jason Ho, a theorist at Ohio State University in Columbus, who has come to Penn State to discuss his theory of the phenomenon. Anthony Leggett, a theorist at the University of Illinois, Urbana-Champaign, says the observations challenge the widely held belief that a well-ordered crystal cannot enter a free-flowing supersolid “phase.” “I would have bet quite strongly against such a phase,” says Leggett in a phone interview from his office. “It looks like the experiments will make me rethink that.”

    Kim and Chan's results must still be confirmed, however. And physicists must deduce whether the helium crystal itself is flowing, or whether the effect arises from the superfluid flow of liquid helium in cracks and crevices in the crystal—a less mind-bending alternative that wouldn't count as supersolidity. To make the call, researchers are tackling new experiments that should challenge even helium physicists, who enjoy a reputation as expert tinkerers who can squeeze every drop of information out of a thimbleful of helium.


    When running, Kim and Chan's cryostat hides inside a container of liquid helium.


    Letting go

    Prone to burst into effusive laughter, Moses Chan talks fast and forcefully. But when discussing solid helium, he chooses his words carefully. “I think it's safe to say we've done all the possible control experiments,” he says. “And though it sounds weird, I think the simplest explanation is that we see superfluidity in a solid.” It's a big claim. Chan is saying that a material structurally similar to rock salt oozes through itself unimpeded. Yet other physicists agree with his assessment of the situation. Of course, they're used to the perversity of helium.

    Since it was first liquefied nearly a century ago, physicists have puzzled over ultracold helium. Every other element freezes at some temperature, but unpressurized helium remains a liquid all the way down to absolute zero. Below 2.17 kelvin, however, the most common helium isotope, helium-4, undergoes a stranger transformation: It becomes a superfluid that flows without any resistance. That happens because, compared with other atoms, light and lively helium atoms act a bit less like billiard balls and a bit more like quantum waves. At low enough temperatures, many collapse into a single quantum wave that resists disturbances, in a process known as Bose-Einstein condensation.
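A standard back-of-the-envelope check (not from the article) makes "more like quantum waves" concrete: condensation becomes possible roughly when an atom's thermal de Broglie wavelength exceeds the spacing between atoms, and for helium-4 near 2.17 kelvin it does.

```python
import math

# Thermal de Broglie wavelength of helium-4 near the superfluid
# transition, compared with the interatomic spacing in the liquid.
# A rough illustration of the condensation criterion, with textbook
# constants; not a calculation from the article.
h = 6.626e-34        # Planck constant, J*s
kB = 1.381e-23       # Boltzmann constant, J/K
m_he4 = 6.646e-27    # mass of a helium-4 atom, kg

def thermal_wavelength(T):
    """lambda = h / sqrt(2 * pi * m * kB * T)"""
    return h / math.sqrt(2 * math.pi * m_he4 * kB * T)

spacing = 3.6e-10    # typical interatomic spacing in liquid helium, m
lam = thermal_wavelength(2.17)
print(f"lambda at 2.17 K: {lam * 1e9:.2f} nm vs spacing {spacing * 1e9:.2f} nm")
```

Heavier atoms at the same temperature have far shorter wavelengths, which is one reason helium is special.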

    Theorists have long speculated that something similar might happen in solid helium-4, which can be made by squeezing the liquid to 25 times atmospheric pressure. In 1969, Russian theorists A. F. Andreev and I. M. Lifshitz proposed that missing atoms, or vacancies, within the helium crystal could condense to form a free-flowing fluid of their own, which would mimic the flow of atoms through the liquid. But no one had seen any sign of a flowing solid until now.

    To spot the stuff, Kim and Chan set their little can twisting back and forth on its shaft. The frequency of oscillation depends on the stiffness of the shaft and the inertia of the can, which in turn depends on the amount of helium stuck to it. At pressures ranging up to 145 times atmospheric pressure, the frequency began to rise suddenly as the temperature sank below about 0.2 kelvin. Those upswings indicated that as much as 1% of the helium was letting go of the oscillator and standing stock-still while the rest of the crystal twisted back and forth, as Kim and Chan reported online on 2 September 2004 in Science. That strange behavior implies that some of the helium glided through the crystal without resistance.
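The logic of the measurement can be sketched: a torsional oscillator's frequency goes as f = (1/2π)√(k/I), so if some of the helium decouples and stops contributing to the moment of inertia I, the frequency rises by a predictable amount. All numbers below are illustrative, not Kim and Chan's.

```python
import math

# Torsional oscillator: f = (1 / 2*pi) * sqrt(k / I).  If a fraction s
# of the helium's moment of inertia decouples, I drops and f rises; the
# measured frequency shift lets you infer s.  Illustrative values only.

def frequency(k, I):
    return math.sqrt(k / I) / (2 * math.pi)

k = 1.0e-3          # torsion constant of the shaft (arbitrary units)
I_cell = 1.0e-7     # moment of inertia of the empty cell
I_helium = 5.0e-9   # contribution of the solid helium

f0 = frequency(k, I_cell + I_helium)            # all helium locked to cell
s = 0.01                                        # 1% of helium decouples
f1 = frequency(k, I_cell + (1 - s) * I_helium)

# Infer the decoupled fraction from the shift: f ~ I^(-1/2), so for
# small shifts dI/I = -2 * df/f.
dI = 2 * (f1 - f0) / f0 * (I_cell + I_helium)
s_inferred = dI / I_helium
print(f"frequency shift: {(f1 - f0) / f0 * 1e6:.1f} ppm, "
      f"inferred fraction: {s_inferred:.4f}")
```

With the helium a few percent of the total inertia, a 1% decoupling shows up as a shift of only a couple hundred parts per million, which is why the part-in-a-million precision described below matters.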

    In principle, the experiment is simple. In practice, it's a challenge, as a glimpse of the guts of one of Chan's refrigerators, or “cryostats,” suggests. The assemblage of tubes, wires, coils, and myriad handmade gadgets hangs like a mechanical stalactite from a platform supported by four great wooden legs. The whole thing stands inside a metal box the size of a small room, which Chan installed to block out radio interference from a nearby campus police station. From the tip of the stalactite hangs the oscillating can; when the can is twisting, its flaps move as little as a single atom's width. Kim and Chan measure changes in the oscillator's frequency to part-in-a-million precision.

    Kim and Chan performed a series of experiments that slowly built the case for supersolid helium. For example, they replaced the helium-4 with the isotope helium-3, the atoms of which cannot crowd into a single quantum state because of the way they spin. That implies solid helium-3 should not flow, which is what the experimenters observed. “Moses was very careful and asked all the questions that he could ask with the kind of apparatus he had,” says experimenter Richard Packard, from his office at the University of California, Berkeley. “And all the answers indicate that something lets go” in helium-4.

    But although researchers agree that the experiments are sound, they disagree on how to explain them. And no one knows whether a crystal really can flow.


    Eunseong Kim holds oscillator that first hinted at flow in “bulk” helium.


    Exchange, grains, and defects

    Ohio State's Ho has no doubt that it can. He has come to Penn State to discuss the theory he is developing, which assumes that supersolid flow occurs and attempts to explain how swirls resembling smoke rings reduce the flow as the temperature increases. In a conference room on the third floor of Davey Hall, Ho stands beside a viewgraph projector and gestures at the screen with a length of half-inch threaded steel rod. “If it gets too complicated, then I apologize,” he says. He's been talking for 90 minutes and will go on for another hour. He's covered blackboards on two walls with equations. It seems supersolidity has no easy explanation.

    Theorists have already advanced several ideas, but most run afoul of the data in one way or another. For example, Andreev and Lifshitz's notion of a quantum wave of vacancies jibed nicely with the results of Kim and Chan's experiments with helium in porous glass, reported in January 2004 in Nature. It seemed plausible that, cramped by the nanometer-sized pores, the crystals would be riddled with vacancies. But this scheme appears less likely in the “bulk” crystal, as experimental evidence suggests that pure solid helium has very few vacancies. And if the vacancies are mobile, then they should quickly wander to the edge of the crystal and vanish, anyway.

    If vacancies don't explain the flow, then perhaps some of the helium atoms themselves undergo Bose-Einstein condensation within the crystal. Leggett and others explored that idea in the 1970s. At first it sounds absurd: In a crystal, each atom is ordinarily confined to a specific site in the three-dimensional “crystal lattice,” whereas in the quantum wave of a Bose-Einstein condensate it's impossible to say precisely where any particular atom is. But thanks to their quantum-wave nature, neighboring helium atoms have a tendency to swap places spontaneously in a process called “exchange.” If they trade places often enough, then in principle some of them may be able to collapse into a single wave and flow in a way that leaves the pristine crystal structure unsullied.

    In actuality, that scenario may be unlikely, however. Leggett originally calculated that the amount of free-flowing helium would be tiny. And computer simulations suggest that a perfectly orderly helium crystal does not undergo Bose-Einstein condensation, as theorists David Ceperley of the University of Illinois, Urbana-Champaign, and Bernard Bernu of Pierre and Marie Curie University in Paris, France, reported in October 2004 in Physical Review Letters. “There's been a lot of speculation that somehow you can get a flow of atoms in a solid,” Ceperley says in a phone interview. “I just don't think that's possible.”

    Some theorists have suggested that supersolid helium is really superslushy helium, with the flow occurring in liquid seeping between tiny bits of solid. Boris Svistunov and Nikolay Prokof'ev of the University of Massachusetts (UMass), Amherst, note that solid helium undoubtedly consists not of a single large crystal but of many smaller interlocking crystalline grains. They calculate that more conventional superfluid liquid flowing between the grains might account for Kim and Chan's data, as they reported this April in Physical Review Letters. But that explanation would require a multitude of micrometer-sized grains, whereas data indicate that helium tends to form fewer, larger grains.

    The secret to supersolidity could lie in the conceptual middle ground between a flowing crystal and liquid flowing between crystal grains, says theorist Burkhard Militzer from his office at the Carnegie Institution of Washington, D.C. The flow could occur within the crystal, he speculates, but along elongated, immobile defects called “stacking faults,” which resemble missed stitches in a piece of fabric. Simulations show that atoms swap places easily along the faults, Militzer says, but they do not yet prove that such faults account for Chan's observations.

    Shaking things up.

    Moses Chan (left) and colleagues Eunseong Kim (center) and Anthony Clark have theorists debating whether a crystal can flow.


    More data, please

    To sort things out, physicists are planning a variety of experiments designed to confirm the observation—and to explain why others had failed to spot the effect before.

    In 1981, researchers from Cornell University in Ithaca, New York, and Bell Telephone Laboratories in Murray Hill, New Jersey, saw no evidence for supersolidity in torsional oscillator experiments similar to Chan's. But the researchers may have been foiled by bad luck and a bit of helium-3. The experiment most comparable to Chan's was contaminated with several parts per million of helium-3, says Cornell's John Reppy in a phone interview. “That would have been enough to wipe out the signal, according to Moses's [data],” he says. “I'm willing to believe it.” Reppy and colleagues at Cornell are running yet another torsional oscillator experiment to try to reproduce the effect.

    Others have searched for supersolidity by trying to squeeze solid helium through tiny holes. If the crystal is free-flowing then it might seep through, just as superfluid liquid helium will flow through openings so small they stop ordinary liquid helium dead. But the fact that solid helium cannot perform this trick may mean only that supersolid and superfluid helium respond to pressure differently, says John Beamish, an experimenter at the University of Alberta in Edmonton, Canada: “If it is a supersolid—and we're not saying it isn't—it doesn't flow as you would naively expect.”

    Curiously, over the past decade, experimenters studying the propagation of sound and heat in solid helium did see signs of a “phase transition.” But they interpreted them very differently. John Goodkind and colleagues at the University of California, San Diego, found that the speed of sound in solid helium increases suddenly as the temperature drops below 0.2 kelvin, and the rate at which the waves die away peaks at that temperature. Goodkind interpreted these and other signs as evidence that some sort of defect proliferates as the temperature of the crystal increases and that these “defectons” undergo Bose-Einstein condensation below a critical temperature.

    Goodkind hopes to resume his experiments, and Haruo Kojima, a physicist at Rutgers University in Piscataway, New Jersey, has begun sound experiments of his own. If supersolidity exists, then it should be possible to generate a sound wave in which only the free-flowing helium moves, explains Kojima, who has come to Penn State to discuss the experiments with Chan. But the experiments may be tricky, he warns, because researchers aren't sure precisely what signals they should expect to see.

    For his part, Chan is devising an elaborate experiment to determine just how many vacancies, grain boundaries, and defects exist in a helium crystal. He plans to run a torsional oscillator in the beamline of a synchrotron x-ray source and to alternately shake the crystal and shine x-rays through it. The sloshing of the oscillator will tell how many atoms are in the crystal, while the scattered x-rays will reveal how many lattice sites there are in it. Only if the crystal is perfect will the two numbers be equal. The experiment may be the key to cutting through the confusion, says UMass's Svistunov in a phone interview: “To answer, how perfect is the crystal? In my opinion, that is the most important question in the field.”
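Chan's proposed cross-check reduces to a simple comparison (the numbers below are invented for illustration):

```python
# The combined oscillator/x-ray measurement in miniature: the oscillator
# counts atoms (through their inertia), x-ray scattering counts lattice
# sites.  Any shortfall of atoms relative to sites measures the vacancy
# fraction; a perfect crystal gives zero.  Invented numbers.

def vacancy_fraction(n_atoms, n_sites):
    return 1.0 - n_atoms / n_sites

n_sites = 1.000e22   # lattice sites, from scattered x-rays
n_atoms = 0.998e22   # atoms, from the oscillator's inertia

f = vacancy_fraction(n_atoms, n_sites)
print(f"vacancy fraction: {f:.3%}")
```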

    Meanwhile, in the sunless subbasement of Osmond Hall, Chan's young colleagues continue their work. Kim is taking data with a bigger oscillator that twists at lower frequencies. Graduate student Anthony Clark is studying solid hydrogen. In March, at the American Physical Society meeting in Los Angeles, California, Clark presented preliminary data that suggest hydrogen may also become a supersolid (Science, 8 April, p. 190). “I want to be completely confident,” Clark says, “and we've been doing a lot of control experiments.”

    Both Kim and Clark say they feel intense pressure working on such potentially groundbreaking experiments. Chan takes the hubbub in stride, however. “Nobel Prize or no Nobel Prize, that doesn't matter. What's really nice is that [our work] has attracted so much attention” from other researchers, he says. “We have already had more fun than we deserve.” He smiles wryly, like a magician who has pulled off a particularly clever trick. Only this time, not even the conjurer knows precisely how the trick works.


    The Quirks and Culture of Helium

    1. Adrian Cho

    Ordinarily an inert gas so light it floats off into space, helium might seem to hold little interest for condensed-matter physicists. But since it was liquefied by Dutch physicist Heike Kamerlingh Onnes in 1908, the odd stuff has revealed much about the physics of liquids and solids. “Throughout history, it has provided a variety of new paradigms,” says Jason Ho, a theorist at Ohio State University in Columbus.

    Since 1938, physicists have known that below 2.17 kelvin the most common isotope of helium, helium-4, becomes a “superfluid” that flows without resistance, as about 9% of the atoms crowd into a single quantum wave. In 1972, physicists discovered that helium-3 also becomes a superfluid at just a few thousandths of a kelvin. Because of the way they spin, helium-3 atoms cannot pile into a single quantum wave. Instead, they form pairs that glide without resistance, as electrons do in superconductors.

    Experiments with helium-3 validated much of the “Fermi liquid theory” that also describes electrons in metals. The superfluid transition in helium-4 provided the primary test bed for the theory of “second-order phase transitions,” which describes, for example, the onset of magnetism in materials.

    While helium has helped theorists develop key concepts, experimenters working with ultracold helium have developed a reputation for old-fashioned ingenuity. Their experimental devices are usually mechanical contraptions that shake, spin, and squeeze helium to produce subtle but telling signals. By tradition, “you don't buy your instrumentation; you invent it,” says John Goodkind, an experimenter at the University of California, San Diego. “You make it, you leak-check it, and you fix it.”

    Helium physicists are also known for seat-of-the-pants problem solving, slathering their refrigerators with soap and glycerin to plug elusive leaks so small only superfluid helium squeezes through, or using a condom to regulate the flow of gas.

    Never very big, the field of helium physics has contracted since its heyday in the 1970s. But researchers trained in helium physics have become leaders in high-temperature superconductivity, nanomechanical devices, two-dimensional electron systems, and other areas. “The people in the field are willing to take risks,” says Richard Packard, an experimenter at the University of California, Berkeley. “They aren't afraid of making new devices, and when they go out into other fields, that same state of mind goes with them.”


    Atlantic Climate Pacemaker for Millennia Past, Decades Hence?

    1. Richard A. Kerr

    An unsteady ocean conveyor delivering heat to the far North Atlantic has been abetting everything from rising temperatures to surging hurricanes, but look for a turnaround soon

    Benjamin Franklin knew about the warm Gulf Stream that flows north and east off the North American coast, ferrying more than a petawatt of heating power to the chilly far North Atlantic. But he could have had little inkling of the role that this ponderous ocean circulation has had in the climatic vicissitudes of the greater Atlantic region and even the globe.

    With a longer view of climate history and long-running climate models, today's researchers are tying decades-long oscillations in the Gulf Stream and the rest of the ocean conveyor to long-recognized fluctuations in Atlantic sea-surface temperatures. These fluctuations, in turn, seem to have helped drive the recent revival of Atlantic hurricanes, the drying of the Sahel in the 1970s and '80s, and the global warming of the past few decades, among other climate trends.

    The ocean conveyor “is an important source of climate variability,” says meteorologist James Hurrell of the National Center for Atmospheric Research in Boulder, Colorado. “There's increasing evidence of the important role oceans have played in climate change.” And there are growing signs that the conveyor may well begin to slow on its own within a decade or two, temporarily cooling the Atlantic and possibly reversing many recent climate effects. Greenhouse warming will prevail globally in both the short and long terms, but sorting out just what the coming decades of climate change will be like in your neighborhood could be a daunting challenge.

    Wobbly ocean.

    North Atlantic temperatures have wavered up and down at a roughly 60- to 80-year pace.


    Researchers agree that the North Atlantic climate machine has been revving up and down lately (Science, 16 June 2000, p. 1984). From recorded temperatures and climate proxies such as tree rings, researchers could see that temperatures around the North Atlantic had risen and fallen in a roughly 60- to 80-year cycle over the past few centuries. This climate variability was dubbed the Atlantic Multidecadal Oscillation (AMO). Ocean observations suggested that a weakening of the ocean conveyor could have cooled the Atlantic region and even the entire Northern Hemisphere in the 1950s and '60s, and a subsequent strengthening could have helped warm it in the 1980s and '90s. But the records were too short to prove that the AMO is a permanent fixture of the climate system or that variations in the conveyor drive the AMO.

    Taking the long view, climate modeler Jeff Knight of the Hadley Centre for Climate Prediction and Research in Exeter, U.K., and colleagues analyzed a 1400-year-long simulation on the Hadley Centre's HadCM3 model, one of the world's leading climate models. The simulations included no changes in climate drivers such as greenhouse gases that could force climate change. Any changes that appeared had to represent natural variations of the model's climate system.

    At April's meeting of the European Geosciences Union in Vienna, Austria, Knight and colleagues reported that the Hadley Centre model produces a rather realistic AMO with a period of 70 to 120 years. And the model AMO persists throughout the 1400-year run, they note, suggesting that the real-world AMO goes back much further than the past century of observations does. The model AMO also tends to be in step with oscillations in the strength of the model's conveyor flow, implying that real-world conveyor variability does indeed drive the AMO.

    Such strong similarities between a model and reality “suggest to me it's quite likely” that the actual Atlantic Ocean works much the same way as the model's does, says climate modeler Peter Stott of the Hadley Centre unit in Reading, who did not participate in the analysis. Hadley model simulations also support the AMO's involvement in prominent regional climate events, such as recurrent drought in northeast Brazil and in the Sahel region of northern Africa, as well as variations in the formation of tropical Atlantic hurricanes, including the resurgence of such hurricanes in the 1990s.

    On page 115, climate modelers Rowan Sutton and Daniel Hodson of the University of Reading, U.K., report that they could simulate the way relatively warm, dry summers in the central United States in the 1930s through the 1960s became cooler and wetter in the 1960s through 1980s. All that was needed was to insert the AMO pattern of sea-surface temperature into the Hadley atmospheric model. That implies that the AMO contributed to the multidecadal seesawing of summertime climate in the region.

    Bad warmth.

    The AMO's warm years favor more U.S. hurricanes.


    If the Hadley model's AMO works as well as it seems to, Knight and his colleagues argue, it should serve as some guide to the future. For example, if North Atlantic temperatures track the conveyor's flow as well in the real world as they do in the model, then the conveyor has been accelerating during the past 35 years—not beginning to slow, as some signs had hinted (Science, 16 April 2004, p. 371). That acceleration could account for about 10% to 25% of the global warming seen since the mid-1970s, they calculate, meaning that rising greenhouse gases haven't been warming the world quite as fast as was thought.

    Judging by the 1400-year simulation's AMO, Knight and colleagues predict that the conveyor will begin to slow within a decade or so. The ensuing slowdown would offset—although only temporarily—a “fairly small fraction” of the greenhouse warming expected in the Northern Hemisphere in the next 30 years. Likewise, Sutton and Hodson predict more drought-prone summers in the central United States in the next few decades.

    But don't bet on any of this just yet. The AMO “is not as regular as clockwork,” says Knight; it's quasi-periodic, not strictly periodic. And no one knows what effect the strengthening greenhouse might have on the AMO, adds Sutton. Most helpful would be an understanding of the AMO's ultimate pacemaker. In the Hadley Centre model, report modelers Michael Vellinga and Peili Wu of the Hadley Centre in Exeter in the December Journal of Climate, the pulsations of the conveyor are timed by the slow wheeling of water around the North Atlantic. It takes about 50 years for fresher-than-normal water created in the tropics by the strengthened conveyor to reach the far north. There, the fresher waters, being less dense, are less inclined to sink and slide back south. The sinking—and therefore the conveyor—slows down, cooling the North Atlantic and reversing the cycle.
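The delayed-feedback loop Vellinga and Wu describe—a stronger conveyor freshens the tropics, and decades later that fresher water reaches the sinking regions and weakens the conveyor—can be caricatured in a few lines of code. This is a toy oscillator, not the Hadley Centre model: the delay, gain, and time step below are invented, and the only point is that a roughly 50-year transit time naturally paces a century-scale swing. (In this minimal setup the emergent period comes out near four times the delay; the real model's shorter 70-to-120-year period reflects richer dynamics.)

```python
import numpy as np

# Toy delayed-negative-feedback oscillator loosely inspired by the
# mechanism described above. All numbers are illustrative, not from
# the Hadley Centre model.

TAU = 50.0    # years for the freshwater anomaly to ride north (assumed)
GAIN = 0.05   # strength of the delayed negative feedback (assumed)
DT = 0.25     # time step in years
YEARS = 1400  # same span as the Hadley run

n = int(YEARS / DT)
lag = int(TAU / DT)
x = np.zeros(n)    # conveyor-strength anomaly
x[:lag] = 0.1      # small initial perturbation

# dx/dt = -GAIN * x(t - TAU): today's trend opposes the state TAU years ago
for i in range(lag, n - 1):
    x[i + 1] = x[i] - GAIN * x[i - lag] * DT

# Estimate the oscillation period from zero crossings in the last half
tail = x[n // 2:]
crossings = np.where(np.diff(np.sign(tail)) != 0)[0]
period = 2 * DT * np.mean(np.diff(crossings))
print(f"emergent period ~ {period:.0f} years")
```

The delay alone generates a self-sustaining multidecadal oscillation; no external forcing is required, which is the sense in which the conveyor is an internal "source of climate variability."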

    That may be how the Hadley AMO works, says oceanographer Jochem Marotzke of the Max Planck Institute for Meteorology in Hamburg, Germany, but it doesn't settle the mechanism question. How a model generates multidecadal Atlantic variability “seems to be dependent on the model you choose,” he says. Before even tentative forecasts of future AMO behavior are taken seriously, other leading models will have to weigh in, too.


    Suitcase-Sized Space Telescope Fills a Big Stellar Niche

    1. Andrew Fazekas*
    1. Andrew Fazekas is a freelance writer in Montreal, Quebec.

    Small but single-minded, Canada's MOST microsatellite is revealing the inner clockwork of stars and characterizing exoplanetary systems

    MONTREAL, CANADA—To astronomers, bigger telescopes usually mean better telescopes. But a Canadian space-based instrument is bucking that trend. Just 2 years into monitoring subtle periodic dips in starlight, the suitcase-sized MOST (Microvariability and Oscillations of Stars) telescope is probing the hidden internal structures of sunlike stars and pinning down their ages with greater precision than ever before. At a meeting here,* astronomers announced that MOST has also begun to provide information about the planets that orbit some of those stars, even hinting at their weather patterns. “Not bad for a space telescope with a mirror the size of a pie plate and a price tag of $10 million Canadian, eh?” says astronomer Jaymie Matthews of the University of British Columbia.

    MOST blasted into space aboard a converted Russian intercontinental ballistic missile on 30 June 2003. Nicknamed “the Humble Space Telescope,” Canada's first space observatory is also the world's smallest, weighing in at only 60 kg and sporting a modest 15-cm mirror. Designed and built for less than 1/20 of the projected cost of any upcoming competing mission, the single-purpose satellite does without most of the instruments found on its larger space-based cousins but still conducts science no other orbiting observatory is equipped to do.

    Above the blurring effect of our atmosphere, MOST's ultraprecise photometer can detect fluctuations in stellar brightness as small as one part in a million—10 times better than ground-based telescopes can achieve. Thanks to a specially designed gyroscope, the Canadian Space Agency-run microsatellite can stare at a star around the clock for up to 2 months. The Hubble Space Telescope, by contrast, can look at a given object for only about 6 days. “MOST is pushing frontiers in stellar astronomy in terms of time sampling and light-measuring precision,” Matthews says. “While this may seem more abstract than what Hubble can do, it is just as revolutionary in terms of what this tiny telescope allows us to see in stars and their planets.”

    Using methods of asteroseismology—the study of starquakes—MOST monitors optical pulsations caused by vibrations of sound waves coursing through a star's deep interior. Just as geologists can map Earth's interior from earthquake signals, astronomers can probe a star's hidden structure by tracking minute oscillations in its luminosity. As the star contracts, its internal pressure increases, heating its gases and temporarily increasing its brightness. The MOST team hopes the technique will lead to better theories about how stars evolve with age.
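The payoff of a long, uninterrupted stare can be sketched with synthetic data: a Fourier transform of a two-month light curve can pull a parts-per-million oscillation out of per-sample noise hundreds of times larger, because the signal adds up coherently over millions of cycles. The frequency, amplitude, and noise level below are invented for illustration, not MOST data.

```python
import numpy as np

# Toy illustration of pulling a tiny periodic brightness signal out of
# noise with a long, continuous light curve. All numbers are assumed.

rng = np.random.default_rng(42)

DAYS = 60        # a MOST-like continuous stare
DT_MIN = 1.0     # one sample per minute
t = np.arange(0, DAYS * 24 * 60, DT_MIN) / (24 * 60)  # time in days

F_OSC = 250.0    # oscillation frequency, cycles/day (assumed)
AMP_PPM = 10.0   # oscillation amplitude, parts per million
NOISE_PPM = 150.0  # per-sample scatter, 15x bigger than the signal

flux = (1.0
        + AMP_PPM * 1e-6 * np.sin(2 * np.pi * F_OSC * t)
        + NOISE_PPM * 1e-6 * rng.standard_normal(t.size))

# Amplitude spectrum: coherent averaging beats the noise floor down
spec = np.abs(np.fft.rfft(flux - flux.mean())) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, d=DT_MIN / (24 * 60))  # cycles/day

peak = freqs[np.argmax(spec)]
print(f"recovered frequency: {peak:.1f} cycles/day (injected {F_OSC})")
```

The same arithmetic shows why gaps hurt: frequency resolution scales as one over the total baseline, so an unbroken 60-day run resolves oscillation modes that a ground-based telescope, interrupted by daylight, would smear together.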

    Packed with potential.

    Boxy MOST focuses on doing one thing very well.


    “Most of the research is being done on sunlike stars, because we know how to interpret the data using our sun as a model,” says starquake hunter Jørgen Christensen-Dalsgaard of the University of Aarhus in Denmark. According to astrophysical models, stars between 80% and 170% as massive as the sun pass through the same basic life cycles as the sun does and should show similar upper atmosphere turbulence and micro-magnitude oscillations. But whereas short, subtle changes in brightness are relatively easy to detect on the sun, they are much trickier to spot in more-distant sunlike stars.

    Not until 2000 did ground-based telescopes become sensitive enough to confirm them in a few dozen solar-type stars. Those observations used spectroscopes to detect shifts in the color of light, from which astronomers could calculate the radial velocity of the stellar surface as it moves up and down. Now MOST—which makes it possible to draw similar inferences from much smaller changes in brightness—is opening a new chapter in the field, says astronomer Travis Metcalfe of the High Altitude Observatory in Boulder, Colorado: “This modest instrument is bound to have a great impact on our understanding of stellar evolution.”

    In July 2004—a year into its observations—MOST's science team, led by Matthews, generated their own waves in the asteroseismology community when they published their observations on the well-studied star Procyon. To the shock of everyone, the satellite found that Procyon showed none of the oscillations that ground-based measurements had seen and theoretical models had predicted for nearly 20 years. “We had 32 continuous days of data representing over a quarter of a million individual measurements and saw nothing,” says Matthews.

    Asteroseismologists around the world are still puzzling over those observations. Christensen-Dalsgaard, a member of one of the first teams to detect Procyon's oscillations from the ground and the biggest critic of MOST's Procyon results, suspects that either light scattered back from Earth into the telescope washed out the data, or “noisier”-than-expected convection in the star's atmosphere made the oscillations unreadable. The possibility of using MOST to study stars' atmospheric churning “is, of course, itself interesting,” he adds. The MOST team revisited Procyon this year and plans to publish an analysis of the new measurements within a few months.

    Things went more smoothly this year, when MOST fixed its gaze on Eta Bootis. This time the data matched both stellar models and earlier ground-based observations. By comparing the data against a library of over 300,000 theoretical stellar models, Matthews and his team have pegged the star's age at 2.4 billion years, plus or minus 30 million years—about 10 times the precision of previous estimates. Studying a variety of sunlike stars with differences in mass, age, and composition will lead to better models, Christensen-Dalsgaard says.

    As a bonus, MOST's ability to measure exquisitely small variations in starlight enables it to double as an exoplanet explorer. At the meeting, the MOST team announced that the telescope had staked out an alien world around a far-off star and turned up subtle hints of an atmosphere and possible cloud cover. NASA's Spitzer Space Telescope had detected the infrared glow from exoplanet HD209458b in March. MOST tracked the subtle dip in optical brightness as the planet slipped behind its parent star during its orbit.

    By following the frequencies and amplitudes of the changes in stellar brightness, the team concluded that the planet is a gas giant 1.2 times as massive as Jupiter, parked less than 1/20 as far from its star as Earth is from the sun. Astronomers think HD209458b's low reflectance (less than 40%, compared with 50% for Jupiter) sets limits on the planet's atmosphere, in which the Hubble Space Telescope saw signs of carbon and oxygen in 2004. MOST will conduct a 45-day survey of the system later this summer with the hope of getting a clearer picture of the exoplanet's atmosphere and even its weather: temperature, pressure, and cloud cover.
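A back-of-envelope calculation shows how small the signal behind that albedo limit is. The dip when the planet disappears behind its star equals the planet's reflected light as a fraction of starlight, roughly albedo × (R_planet/a)². The radius and orbital distance below are approximate published values for HD 209458b (not given in the text above); the 40% albedo is the upper limit quoted here.

```python
# Back-of-envelope secondary-eclipse depth:
#     depth ~ albedo * (R_planet / a)^2

R_JUP = 7.15e7           # Jupiter radius, meters
AU = 1.496e11            # astronomical unit, meters

r_planet = 1.35 * R_JUP  # approximate literature radius of HD 209458b
a = 0.047 * AU           # less than 1/20 of the Earth-sun distance
albedo = 0.40            # upper limit from the MOST observations

depth = albedo * (r_planet / a) ** 2
print(f"eclipse depth < {depth * 1e6:.0f} parts per million")
```

The answer comes out at a few tens of parts per million, which is exactly why a photometer sensitive to one part in a million is needed to say anything about the planet's reflectance at all.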

    MOST's asteroseismological monopoly is destined to be short-lived. Similar satellites on the horizon include the European COROT (Convection, Rotation, and planetary Transits) mission, slated for launch in June 2006, and NASA's own planet seeker, Kepler, due in 2007. Unlike MOST, both satellites will be technologically capable of detecting Earth-size worlds. COROT's more sensitive detector will also be able to look at many stars simultaneously, rather than one at a time, as MOST does. But COROT and Kepler will focus on fainter stars than MOST observes, and their vision will be limited to smaller sections of the sky, Metcalfe says. As a result, he argues, during the tail end of its 5-year life span, MOST will complement the other missions and will not become obsolete when they come on line.

    Christensen-Dalsgaard agrees. “MOST is giving us the experience that we need to learn how stars behave photometrically and helps us learn how to choose targets for these later missions,” he says. “So in the next couple of years, we need to make the most out of MOST.”

    • * CASCA 2005, Montreal, Quebec, 15-17 May.


    Combating Radioactive Risks and Isolation in Tajikistan

    1. Richard Stone

    The science academy of this war-weary country is reaching out for help in tracking down lost radioactive sources—and restoring scientific vitality

    FAIZABAD, TAJIKISTAN—In the early 1990s, as civil war raged in this mountainous land, a terrorist's prize was here for the taking. Powerful radioactive sources lay buried in an open-air, gravel-covered pit on a compound ringed by a dilapidated concrete wall and chain-link fence. During the 5-year war, villagers and fighters pillaged nearby apple orchards and industrial sites. But the makings of dirty bombs—including radioisotopes such as cesium, cobalt, and americium in old Soviet gauges and other devices—remained untouched. “We were lucky,” says Gennady Krivopuskov, manager of the 6-hectare waste storage facility 50 kilometers northeast of the capital, Dushanbe. “Maybe the radiation hazard signs kept looters away.”

    How long the rad cops' luck will last is an open question. One or two derelict radioactive generators, which produce electricity from the heat harnessed from the decay of strontium-90, were never moved to this storage facility and remain unaccounted for, experts say. Each radioisotope thermoelectric generator (RTG) packs a whopping 40,000 curies—equivalent to the radioactivity from strontium-90 released during the 1986 Chornobyl explosion and fire. “How serious is it that they aren't secured? Well, that depends on who has them,” says a Western diplomat. Last month, a U.S. Department of Energy (DOE) team was in Dushanbe to train specialists at the Nuclear and Radiation Safety Agency of the Academy of Sciences of the Republic of Tajikistan (AST) on how to detect abandoned sources. Search efforts are about to get under way.

    RTGs first drew attention as a serious proliferation threat 3 years ago, when the International Atomic Energy Agency (IAEA) in Vienna helped secure a pair of abandoned generators in the Republic of Georgia (Science, 1 February 2002, p. 777). IAEA has since learned that more than 1000 such generators were produced in the Soviet Union; the vast majority stayed in Russia, where they were used primarily to power Arctic lighthouses. But in recent years scores have gone astray or been vandalized for scrap metal. In Tajikistan, where the generators were used to power remote weather stations, four RTGs have been recovered and are awaiting transfer to Russia for disposal, says Ulmas Mirsaidov, director of the radiation safety agency. Although Mirsaidov told Science that all RTGs in Tajikistan are now secured, DOE officials and a Western diplomat in Dushanbe say that units are missing; one or two is the best estimate based on present information.


    Barriers have been upgraded at the Faizabad radwaste site, with help from the U.S. Department of Energy.


    Tajikistan's radiation agency is now working with IAEA to compile an inventory of radiological sources. “We're helping them make sense of their records and develop a search plan,” says Carolyn MacKenzie, a radiation source specialist with IAEA. There's no indication that any RTGs have fallen into the wrong hands. Still, there's a disconcerting lack of knowledge about where precisely to look. “When the Soviets left, the records weren't passed on,” MacKenzie says. “We don't have definite information,” adds Roman Khan, a health physicist at Argonne National Laboratory in Illinois. DOE's Search and Secure Program, Khan says, has provided Mirsaidov's agency with a suite of instruments—including a portable radiometer capable of detecting alpha and beta particles and gamma rays, a hand-held gamma ray spectrometer, and a broad energy germanium detector—for tracking down orphan sources.

    The hope is that the loose RTGs can be located and stored as soon as possible at the Faizabad facility, a hilly territory alive with discus-sized tortoises, a cacophony of sparrows, and a riot of bright-red poppies. A short walk up the road, through an inner fence patrolled by a machine gun-toting guard, is a whitewashed building with a massive gray steel door. Buried here, 9 meters beneath the dirt floor, are a variety of radioactive sources, including x-ray fluorescence instruments containing americium-241 that were used for geological surveys, radiotherapy canisters filled with cobalt-60, and four RTGs recovered so far.

    The repository was rebuilt last year with DOE and U.S. State Department support. The previous structure was frail indeed: On several occasions, high winds that sweep down from the mountains from September to March tore off the corrugated steel roof, says Krivopuskov, who after 26 years of service receives a salary of $12 per month. Thanks to the renovations, he claims, the sources “can stay here safely for 1000 years.” In the meantime, though, he and his colleagues must contemplate the fate of the sources that haven't yet been secured.


    Shock and Recovery

    1. Richard Stone

    DUSHANBE—The hunt for hot sources (see main text) is one of several challenges that the Academy of Sciences of the Republic of Tajikistan (AST) faces as it attempts to recover from a brutal civil war that followed the Soviet collapse. Some of the academy's prized assets, including a cosmic-ray physics laboratory, astronomical observatories, and a network of seismic stations, emerged surprisingly intact. But lingering memories of the civil war and ongoing concerns about Tajikistan's anemic law enforcement—including an unsolved car bombing outside a government building last January—have put a damper on international cooperation.

    During the Cold War, the Soviets bankrolled some high-profile Tajikistani projects. The Soviet Equatorial Meteor Expedition from 1968 to 1970, organized by AST's Institute of Astrophysics, painted a detailed picture of meteor bombardment of Earth and wind patterns in the upper atmosphere. And in 1963, the Institute of Earthquake Engineering and Seismology inaugurated the Lyaur testing range, a unique facility where artificial earthquakes—simulated with explosives—probed the durability of full-scale model buildings constructed from novel seismic-resistant materials and designs.

    By the early 1990s, however, most scientific activity in Tajikistan, the poorest of the 15 nations born from the old Soviet Union, had ground to a halt. During the worst years of the civil war, in 1992 and 1993, food was scarce, power outages frequent, public transportation virtually nonexistent, and the water supply and telephone lines unreliable. “Yet we came to work every day,” says Alla Aslitdinova, director of AST's central library. “I can't explain why.” Thefts were commonplace. “People stole our computers and other equipment,” says Khursand Ibadinov, director of the astrophysics institute. “Fortunately, they left the telescopes,” he says, including a 40-centimeter Zeiss astrograph at the Hissar observatory near Dushanbe.

    Star trackers.

    Khursand Ibadinov, director of the astrophysics institute (left), and academician Pulat Bobojonov with a high-precision astrometry telescope at the Hissar observatory.


    Shelling, gunfire, and penury were not the only problems. The Russian government asserted ownership of the Murgab cosmic-ray research station, perched in the Pamir Mountains northeast of Dushanbe. Because of the dispute—which shows no signs of ending—“for 14 years no experiments have been carried out there,” says AST president Mamadsho Ilolov, a mathematician.

    The country's seismic stations, meanwhile, require an extensive upgrade from analog to digital instruments. But the investment would be worth it, says David Simpson, president of IRIS, a university seismological consortium based in Washington, D.C.: It could sharpen the monitoring of regional seismic hazards and help probe fundamental questions such as the geological structure of the Pamirs. Simpson led a seismological project at Nurek reservoir in Tajikistan from 1975 into the early 1980s. “Even under Soviet power at that time, it was a wonderful, friendly, and beautiful place to live and work,” he says.

    AST researchers hope to soon end the isolation that has cocooned them since the civil war. “I have a dream to start academic and student exchanges” with U.S. universities, says Aslitdinova, who spent 4 months as a Fulbright scholar late last year at Northwestern University in Evanston, Illinois. It's a dream many Tajikistanis share, but one that will be a struggle to make come true.

    In Praise of Hard Questions

    1. Tom Siegfried*
    1. Tom Siegfried is the author of Strange Matters and The Bit and the Pendulum.

    Great cases, as U.S. Supreme Court Justice Oliver Wendell Holmes suggested a century ago, may make bad law. But great questions often make very good science.

    Unsolved mysteries provide science with motivation and direction. Gaps in the road to scientific knowledge are not potholes to be avoided, but opportunities to be exploited.

    “Fundamental questions are guideposts; they stimulate people,” says 2004 Nobel physics laureate David Gross. “One of the most creative qualities a research scientist can have is the ability to ask the right questions.”

    Science's greatest advances occur on the frontiers, at the interface between ignorance and knowledge, where the most profound questions are posed. There's no better way to assess the current condition of science than listing the questions that science cannot answer. “Science,” Gross declares, “is shaped by ignorance.”

    There have been times, though, when some believed that science had paved over all the gaps, ending the age of ignorance. When Science was born, in 1880, James Clerk Maxwell had died just the year before, after successfully explaining light, electricity, magnetism, and heat. Along with gravity, which Newton had mastered 2 centuries earlier, physics was, to myopic eyes, essentially finished. Darwin, meanwhile, had established the guiding principle of biology, and Mendeleyev's periodic table—only a decade old—allowed chemistry to publish its foundations on a poster board. Maxwell himself mentioned that many physicists believed the trend in their field was merely to measure the values of physical constants “to another place of decimals.”

    Nevertheless, great questions raged. Savants of science debated not only the power of natural selection, but also the origin of the solar system, the age and internal structure of Earth, and the prospect of a plurality of worlds populating the cosmos.

    In fact, at the time of Maxwell's death, his theory of electromagnetic fields was not yet widely accepted or even well known; experts still argued about whether electricity and magnetism propagated their effects via “action at a distance,” as gravity (supposedly) did, or by Michael Faraday's “lines of force” (incorporated by Maxwell into his fields). Lurking behind that dispute was the deeper issue of whether gravity could be unified with electromagnetism (Maxwell thought not), a question that remains one of the greatest in science today, in a somewhat more complicated form.

    Maxwell knew full well that his accomplishments left questions unanswered. His calculations regarding the internal motion of molecules did not agree with measurements of specific heats, for instance. “Something essential to the complete state of the physical theory of molecular encounters must have hitherto escaped us,” he commented.

    When Science turned 20—at the 19th century's end—Maxwell's mentor William Thomson (Lord Kelvin) articulated the two grand gaps in knowledge of the day. (He called them “clouds” hanging over physicists' heads.) One was the mystery of specific heats that Maxwell had identified; the other was the failure to detect the ether, a medium seemingly required by Maxwell's electromagnetic waves.

    Filling those two gaps in knowledge required the 20th century's quantum and relativity revolutions. The ignorance enveloped in Kelvin's clouds was the impetus for science's revitalization.

    Throughout the last century, pursuing answers to great questions reshaped human understanding of the physical and living world. Debates over the plurality of worlds assumed galactic proportions, specifically addressing whether Earth's home galaxy, the Milky Way, was only one of many such conglomerations of stars. That issue was soon resolved in favor of the Milky Way's nonexclusive status, in much the same manner that Earth itself had been demoted from its central role in the cosmos by Copernicus centuries before.

    But the existence of galaxies outside our own posed another question, about the apparent motions of those galaxies away from one another. That question echoed a curious report in Science's first issue about a set of stars forming a triangular pattern, with a double star at the apex and two others forming the base. Precise observations showed the stars to be moving apart, making the triangle bigger but maintaining its form.

    “It seems probable that all these stars are slowly moving away from one common point, so that many years back they were all very much closer to one another,” Science reported, as though the four stars had all begun their journey from the same place. Understanding such motion was a question “of the highest interest.”

    Half a century later, Edwin Hubble enlarged that question from one about stellar motion to the origin and history of the universe itself. He showed that galaxies also appeared to be receding from a common starting point, evidence that the universe was expanding. With Hubble's discovery, cosmology's grand questions began to morph from the philosophical to the empirical. And with the discovery of the cosmic microwave background in the 1960s, the big bang theory of the universe's birth assumed the starring role on the cosmological stage—providing cosmologists with one big answer and many new questions.


    By Science's centennial, a quarter-century ago, many gaps still remained in knowledge of the cosmos; some of them have since been filled, while others linger. At that time debate continued over the existence of planets around faraway stars, a question now settled with the discovery of dozens of planets in the solar system's galactic neighborhood. But now a bigger question looms beyond the scope of planets or even galaxies: the prospect of multiple universes, cousins to the bubble of time and space that humans occupy.

    And not only may the human universe not be alone (defying the old definition of universe), humans may not be alone in their own space, either. The possible existence of life elsewhere in the cosmos remains as great a gap as any in present-day knowledge. And it is enmeshed with the equally deep mystery of life's origin on Earth.

    Life, of course, inspires many deep questions, from the prospects for immortality to the prognosis for eliminating disease. Scientists continue to wonder whether they will ever be able to create new life forms from scratch, or at least simulate life's self-assembling capabilities. Biologists, physicists, mathematicians, and computer scientists have begun cooperating on a sophisticated “systems biology” aimed at understanding how the countless molecular interactions at the heart of life fit together in the workings of cells, organs, and whole animals. And if successful, the systems approach should help doctors tailor treatments to individual variations in DNA, permitting personalized medicine that deters disease without inflicting side effects. Before Science turns 150, revamped versions of modern medicine may make it possible for humans to live that long, too.

    As Science and science age, knowledge and ignorance have coevolved, and the nature of the great questions sometimes changes. Old questions about the age and structure of the Earth, for instance, have given way to issues concerning the planet's capacity to support a growing and aging population.

    Some great questions get bigger over time, encompassing an ever-expanding universe, or become more profound, such as the quest to understand consciousness. On the other hand, many deep questions drive science to smaller scales, more minute than the realm of atoms and molecules, or to a greater depth of detail underlying broad-brush answers to past big questions. In 1880, some scientists remained unconvinced by Maxwell's evidence for atoms. Today, the analogous debate focuses on superstrings as the ultimate bits of matter, on a scale a trillion trillion times smaller. Old arguments over evolution and natural selection have descended to debates on the dynamics of speciation, or how particular behaviors, such as altruistic cooperation, have emerged from the laws of individual competition.


    Great questions themselves evolve, of course, because their answers spawn new and better questions in turn. The solutions to Kelvin's clouds—relativity and quantum physics—generated many of the mysteries on today's list, from the composition of the cosmos to the prospect for quantum computers.

    Ultimately, great questions like these both define the state of scientific knowledge and drive the engines of scientific discovery. Where ignorance and knowledge converge, where the known confronts the unknown, is where scientific progress is most dramatically made. “Thoroughly conscious ignorance,” wrote Maxwell, “is the prelude to every real advance in science.”

    So when science runs out of questions, it would seem, science will come to an end. But there's no real danger of that. The highway from ignorance to knowledge runs both ways: As knowledge accumulates, diminishing the ignorance of the past, new questions arise, expanding the areas of ignorance to explore.

    Maxwell knew that even an era of precision measurements is not a sign of science's end but preparation for the opening of new frontiers. In every branch of science, Maxwell declared, “the labor of careful measurement has been rewarded by the discovery of new fields of research and by the development of new scientific ideas.”

    If science's progress seems to slow, it's because its questions get increasingly difficult, not because there will be no new questions left to answer.

    Fortunately, hard questions also can make great science, just as Justice Holmes noted that hard cases, like great cases, made bad law. Bad law resulted, he said, because emotional concerns about celebrated cases exerted pressures that distorted well-established legal principles. And that's why the situation in science is the opposite of that in law. The pressures of the great, hard questions bend and even break well-established principles, which is what makes science forever self-renewing—and which is what demolishes the nonsensical notion that science's job will ever be done.

    What Is the Universe Made Of?

    1. Charles Seife

    Every once in a while, cosmologists are dragged, kicking and screaming, into a universe much more unsettling than they had any reason to expect. In the 1500s and 1600s, Copernicus, Kepler, and Newton showed that Earth is just one of many planets orbiting one of many stars, destroying the comfortable Medieval notion of a closed and tiny cosmos. In the 1920s, Edwin Hubble showed that our universe is constantly expanding and evolving, a finding that eventually shattered the idea that the universe is unchanging and eternal. And in the past few decades, cosmologists have discovered that the ordinary matter that makes up stars and galaxies and people is less than 5% of everything there is. Grappling with this new understanding of the cosmos, scientists face one overriding question: What is the universe made of?

    This question arises from years of progressively stranger observations. In the 1960s, astronomers discovered that galaxies spun around too fast for the collective pull of the stars' gravity to keep them from flying apart. Something unseen appears to be keeping the stars from flinging themselves away from the center: unilluminated matter that exerts extra gravitational force. This is dark matter.

    Over the years, scientists have spotted some of this dark matter in space; they have seen ghostly clouds of gas with x-ray telescopes, watched the twinkle of distant stars as invisible clumps of matter pass in front of them, and measured the distortion of space and time caused by invisible mass in galaxies. And thanks to observations of the abundances of elements in primordial gas clouds, physicists have concluded that only 10% of ordinary matter is visible to telescopes.

    In the dark. Dark matter holds galaxies together; supernova measurements point to a mysterious dark energy.

    But even multiplying all the visible “ordinary” matter by 10 doesn't come close to accounting for how the universe is structured. When astronomers look up in the heavens with powerful telescopes, they see a lumpy cosmos. Galaxies don't dot the skies uniformly; they cluster together in thin tendrils and filaments that twine among vast voids. Just as there isn't enough visible matter to keep galaxies spinning at the right speed, there isn't enough ordinary matter to account for this lumpiness. Cosmologists now conclude that the gravitational forces exerted by another form of dark matter, made of an as-yet-undiscovered type of particle, must be sculpting these vast cosmic structures. They estimate that this exotic dark matter makes up about 25% of the stuff in the universe—five times as much as ordinary matter.

    But even this mysterious entity pales by comparison to another mystery: dark energy. In the late 1990s, scientists examining distant supernovae discovered that the universe is expanding faster and faster, instead of slowing down as the laws of physics would imply. Is there some sort of antigravity force blowing the universe up?

    All signs point to yes. Independent measurements of a variety of phenomena—cosmic background radiation, element abundances, galaxy clustering, gravitational lensing, gas cloud properties—all converge on a consistent, but bizarre, picture of the cosmos. Ordinary matter and exotic, unknown particles together make up only about 30% of the stuff in the universe; the rest is this mysterious antigravity force known as dark energy.

    This means that figuring out what the universe is made of will require answers to three increasingly difficult sets of questions. What is ordinary dark matter made of, and where does it reside? Astrophysical observations, such as those that measure the bending of light by massive objects in space, are already yielding the answer. What is exotic dark matter? Scientists have some ideas, and with luck, a dark-matter trap buried deep underground or a high-energy atom smasher will discover a new type of particle within the next decade. And finally, what is dark energy? This question, which wouldn't even have been asked a decade ago, seems to transcend known physics more than any other phenomenon yet observed. Ever-better measurements of supernovae and cosmic background radiation as well as planned observations of gravitational lensing will yield information about dark energy's “equation of state”—essentially a measure of how squishy the substance is. But at the moment, the nature of dark energy is arguably the murkiest question in physics—and the one that, when answered, may shed the most light.

  20. So Much More to Know …

    From the nature of the cosmos to the nature of societies, the following 100 questions span the sciences. Some are pieces of questions discussed above; others are big questions in their own right. Some will drive scientific inquiry for the next century; others may soon be answered. Many will undoubtedly spawn new questions.

    Is ours the only universe?

    A number of quantum theorists and cosmologists are trying to figure out whether our universe is part of a bigger “multiverse.” But others suspect that this hard-to-test idea may be a question for philosophers.

    What drove cosmic inflation?

    In the first moments after the big bang, the universe blew up at an incredible rate. But what did the blowing? Measurements of the cosmic microwave background and other astrophysical observations are narrowing the possibilities.

    When and how did the first stars and galaxies form?

    The broad brush strokes are visible, but the fine details aren't. Data from satellites and ground-based telescopes may soon help pinpoint, among other particulars, when the first generation of stars burned off the hydrogen “fog” that filled the universe.

    Where do ultrahigh-energy cosmic rays come from?

    Above a certain energy, cosmic rays don't travel very far before being destroyed. So why are cosmic-ray hunters spotting such rays with no obvious source within our galaxy?

    What powers quasars?

    The mightiest energy fountains in the universe probably get their power from matter plunging into whirling supermassive black holes. But the details of what drives their jets remain anybody's guess.

    What is the nature of black holes?

    Relativistic mass crammed into a quantum-sized object? It's a recipe for disaster—and scientists are still trying to figure out the ingredients.

    Why is there more matter than antimatter?

    To a particle physicist, matter and antimatter are almost the same. Some subtle difference must explain why matter is common and antimatter rare.

    Does the proton decay?

    In a theory of everything, quarks (which make up protons) should somehow be convertible to leptons (such as electrons)—so catching a proton decaying into something else might reveal new laws of particle physics.

    What is the nature of gravity?

    It clashes with quantum theory. It doesn't fit in the Standard Model. Nobody has spotted the particle that is responsible for it. Newton's apple contained a whole can of worms.

    Why is time different from other dimensions?

    It took millennia for scientists to realize that time is a dimension, like the three spatial dimensions, and that time and space are inextricably linked. The equations make sense, but they don't satisfy those who ask why we perceive a “now” or why time seems to flow the way it does.

    Are there smaller building blocks than quarks?

    Atoms were “uncuttable.” Then scientists discovered protons, neutrons, and other subatomic particles—which were, in turn, shown to be made up of quarks and gluons. Is there something more fundamental still?

    Are neutrinos their own antiparticles?

    Nobody knows this basic fact about neutrinos, although a number of underground experiments are under way. Answering this question may be a crucial step to understanding the origin of matter in the universe.

    Is there a unified theory explaining all correlated electron systems?

    High-temperature superconductors and materials with giant and colossal magnetoresistance are all governed by the collective rather than individual behavior of electrons. There is currently no common framework for understanding them.

    What is the most powerful laser researchers can build?

    Theorists say an intense enough laser field would rip photons into electron-positron pairs, dousing the beam. But no one knows whether it's possible to reach that point.

    Can researchers make a perfect optical lens?

    They've done it with microwaves but never with visible light.

    Is it possible to create magnetic semiconductors that work at room temperature?

    Such devices have been demonstrated at low temperatures but not yet in a range warm enough for spintronics applications.

    What is the pairing mechanism behind high-temperature superconductivity?

    Electrons in superconductors surf together in pairs. After 2 decades of intense study, no one knows what holds them together in the complex, high-temperature materials.

    Can we develop a general theory of the dynamics of turbulent flows and the motion of granular materials?

    So far, such “nonequilibrium systems” defy the tool kit of statistical mechanics, and the failure leaves a gaping hole in physics.

    Are there stable high-atomic-number elements?

    A superheavy element with 184 neutrons and 114 protons should be relatively stable, if physicists can create it.

    Is superfluidity possible in a solid? If so, how?

    Despite hints in solid helium, nobody is sure whether a crystalline material can flow without resistance. If new types of experiments show that such outlandish behavior is possible, theorists would have to explain how.

    What is the structure of water?

    Researchers continue to tussle over how many bonds each H₂O molecule makes with its nearest neighbors.

    What is the nature of the glassy state?

    Molecules in a glass are arranged much like those in liquids but are more tightly packed. Where and why does liquid end and glass begin?

    Are there limits to rational chemical synthesis?

    The larger synthetic molecules get, the harder it is to control their shapes and make enough copies of them to be useful. Chemists will need new tools to keep their creations growing.

    What is the ultimate efficiency of photovoltaic cells?

    Conventional solar cells top out at converting 32% of the energy in sunlight to electricity. Can researchers break through the barrier?

    Will fusion always be the energy source of the future?

    It's been 35 years away for about 50 years, and unless the international community gets its act together, it'll be 35 years away for many decades to come.

    What drives the solar magnetic cycle?

    Scientists believe differing rates of rotation from place to place on the sun underlie its 22-year magnetic cycle. They just can't make it work in their simulations. Either a detail is askew, or it's back to the drawing board.

    How do planets form?

    How bits of dust and ice and gobs of gas came together to form the planets without the sun devouring them all is still unclear. Planetary systems around other stars should provide clues.

    What causes ice ages?

    Something about the way the planet tilts, wobbles, and careens around the sun presumably brings on ice ages every 100,000 years or so, but reams of climate records haven't explained exactly how.

    What causes reversals in Earth's magnetic field?

    Computer models and laboratory experiments are generating new data on how Earth's magnetic poles might flip-flop. The trick will be matching simulations to enough aspects of the magnetic field beyond the inaccessible core to build a convincing case.

    Are there earthquake precursors that can lead to useful predictions?

    Prospects for finding signs of an imminent quake have been waning since the 1970s. Understanding faults will progress, but routine prediction would require an as-yet-unimagined breakthrough.

    Is there—or was there—life elsewhere in the solar system?

    The search for life—past or present—on other planetary bodies now drives NASA's planetary exploration program, which focuses on Mars, where water abounded when life might have first arisen.

    What is the origin of homochirality in nature?

    Most biomolecules can be synthesized in mirror-image shapes. Yet in organisms, amino acids are always left-handed, and sugars are always right-handed. The origins of this preference remain a mystery.

    Can we predict how proteins will fold?

    Out of a near infinitude of possible ways to fold, a protein picks one in just tens of microseconds. The same task takes 30 years of computer time.
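    The "near infinitude" is easy to put a number on with Levinthal-style back-of-envelope arithmetic. The figures below (100 residues, three backbone conformations per residue, 10^13 conformations sampled per second) are illustrative assumptions, not numbers from the text:

```python
# Levinthal-style estimate: why exhaustive search of folds is hopeless.
# Assumed numbers (illustrative): 100 residues, 3 backbone conformations
# per residue, 10^13 conformations tried per second.
conformations = 3 ** 100                 # ~5e47 possible folds
rate = 10 ** 13                          # conformations tried per second
seconds_per_year = 3600 * 24 * 365
years = conformations / (rate * seconds_per_year)
print(f"{conformations:.2e} conformations; ~{years:.1e} years to try them all")
```

    Even with these generous assumptions, random search would take on the order of 10^27 years, which is why folding must be guided by the energy landscape rather than exhaustive.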

    How many proteins are there in humans?

    It has been hard enough counting genes. Proteins can be spliced in different ways and decorated with numerous functional groups, all of which makes counting their numbers impossible for now.

    How do proteins find their partners?

    Protein-protein interactions are at the heart of life. To understand how partners come together in precise orientations in seconds, researchers need to know more about the cell's biochemistry and structural organization.

    How many forms of cell death are there?

    In the 1970s, apoptosis was finally recognized as distinct from necrosis. Some biologists now argue that the cell death story is even more complicated. Identifying new ways cells die could lead to better treatments for cancer and degenerative diseases.

    What keeps intracellular traffic running smoothly?

    Membranes inside cells transport key nutrients around, and through, various cell compartments without sticking to each other or losing their way. Insights into how membranes stay on track could help conquer diseases, such as cystic fibrosis.

    What enables cellular components to copy themselves independent of DNA?

    Centrosomes, which help pull apart paired chromosomes, and other organelles replicate on their own time, without DNA's guidance. This independence still defies explanation.

    What roles do different forms of RNA play in genome function?

    RNA is turning out to play a dizzying assortment of roles, from potentially passing genetic information to offspring to muting gene expression. Scientists are scrambling to decipher this versatile molecule.

    What role do telomeres and centromeres play in genome function?

    These chromosome features will remain mysteries until new technologies can sequence them.

    Why are some genomes really big and others quite compact?

    The puffer fish genome is 400 million bases long; one lungfish's is 133 billion. Repetitive and duplicated DNA don't explain why this and other size differences exist.

    What is all that “junk” doing in our genomes?

    DNA between genes is proving important for genome function and the evolution of new species. Comparative sequencing, microarray studies, and lab work are helping genomicists find a multitude of genetic gems amid the junk.

    How much will new technologies lower the cost of sequencing?

    New tools and conceptual breakthroughs are driving the cost of DNA sequencing down by orders of magnitude. The reductions are enabling research from personalized medicine to evolutionary biology to thrive.

    How do organs and whole organisms know when to stop growing?

    A person's right and left legs almost always end up the same length, and the hearts of mice and elephants each fit the proper rib cage. How genes set limits on cell size and number continues to mystify.

    How can genome changes other than mutations be inherited?

    Researchers are finding ever more examples of this process, called epigenetics, but they can't explain what causes and preserves the changes.

    How is asymmetry determined in the embryo?

    Whirling cilia help an embryo tell its left from its right, but scientists are still looking for the first factors that give a relatively uniform ball of cells a head, tail, front, and back.

    How do limbs, fins, and faces develop and evolve?

    The genes that determine the length of a nose or the breadth of a wing are subject to natural and sexual selection. Understanding how selection works could lead to new ideas about the mechanics of evolution with respect to development.

    What triggers puberty?

    Nutrition—including that received in utero—seems to help set this mysterious biological clock, but no one knows exactly what forces childhood to end.

    Are stem cells at the heart of all cancers?

    The most aggressive cancer cells look a lot like stem cells. If cancers are caused by stem cells gone awry, studies of a cell's “stemness” may lead to tools that could catch tumors sooner and destroy them more effectively.

    Is cancer susceptible to immune control?

    Although our immune responses can suppress tumor growth, tumor cells can combat those responses with countermeasures. This defense can stymie researchers hoping to develop immune therapies against cancer.

    Can cancers be controlled rather than cured?

    Drugs that cut off a tumor's fuel supplies—say, by stopping blood-vessel growth—can safely check or even reverse tumor growth. But how long the drugs remain effective is still unknown.

    Is inflammation a major factor in all chronic diseases?

    It's a driver of arthritis, but cancer and heart disease? More and more, the answer seems to be yes, and the question remains why and how.

    How do prion diseases work?

    Even if one accepts that prions are just misfolded proteins, many mysteries remain. How, for example, do they get from the gut to the brain, and how do they kill cells once there?

    How much do vertebrates depend on the innate immune system to fight infection?

    This system predates the vertebrate adaptive immune response. Its relative importance is unclear, but immunologists are working to find out.

    Does immunologic memory require chronic exposure to antigens?

    Yes, say a few prominent thinkers, but experiments with mice now challenge the theory. Putting the debate to rest would require proving that something is not there, so the question likely will not go away.

    Why doesn't a pregnant woman reject her fetus?

    Recent evidence suggests that the mother's immune system doesn't “realize” that the fetus is foreign even though it gets half its genes from the father. Yet just as Nobelist Peter Medawar said when he first raised this question in 1952, “the verdict has yet to be returned.”

    What synchronizes an organism's circadian clocks?

    Circadian clock genes have popped up in all types of creatures and in many parts of the body. Now the challenge is figuring out how all the gears fit together and what keeps the clocks set to the same time.

    How do migrating organisms find their way?

    Birds, butterflies, and whales make annual journeys of thousands of kilometers. They rely on cues such as stars and magnetic fields, but the details remain unclear.

    Why do we sleep?

    A sound slumber may refresh muscles and organs or keep animals safe from dangers lurking in the dark. But the real secret of sleep probably resides in the brain, which is anything but still while we're snoring away.

    Why do we dream?

    Freud thought dreaming provides an outlet for our unconscious desires. Now, neuroscientists suspect that brain activity during REM sleep—when dreams occur—is crucial for learning. Is the experience of dreaming just a side effect?

    Why are there critical periods for language learning?

    Monitoring brain activity in young children—including infants—may shed light on why children pick up languages with ease while adults often struggle to learn train station basics in a foreign tongue.

    Do pheromones influence human behavior?

    Many animals use airborne chemicals to communicate, particularly when mating. Controversial studies have hinted that humans too use pheromones. Identifying them will be key to assessing their sway on our social lives.

    How do general anesthetics work?

    Scientists are chipping away at the drugs' effects on individual neurons, but understanding how they render us unconscious will be a tougher nut to crack.

    What causes schizophrenia?

    Researchers are trying to track down genes involved in this disorder. Clues may also come from research on traits schizophrenics share with normal people.

    What causes autism?

    Many genes probably contribute to this baffling disorder, as well as unknown environmental factors. A biomarker for early diagnosis would help improve existing therapy, but a cure is a distant hope.

    To what extent can we stave off Alzheimer's?

    A 5- to 10-year delay in this late-onset disease would improve old age for millions. Researchers are determining whether treatments with hormones or antioxidants, or mental and physical exercise, will help.

    What is the biological basis of addiction?

    Addiction involves the disruption of the brain's reward circuitry. But personality traits such as impulsivity and sensation-seeking also play a part in this complex behavior.

    Is morality hardwired into the brain?

    That question has long puzzled philosophers; now some neuroscientists think brain imaging will reveal circuits involved in reasoning.

    What are the limits of learning by machines?

    Computers can already beat the world's best chess players, and they have a wealth of information on the Web to draw on. But abstract reasoning is still beyond any machine.

    How much of personality is genetic?

    Aspects of personality are influenced by genes; environment modifies the genetic effects. The relative contributions remain under debate.

    What is the biological root of sexual orientation?

    Much of the “environmental” contribution to homosexuality may occur before birth in the form of prenatal hormones, so answering this question will require more than just the hunt for “gay genes.”

    Will there ever be a tree of life that systematists can agree on?

    Despite better morphological, molecular, and statistical methods, researchers' trees don't agree. Expect greater, but not complete, consensus.

    How many species are there on Earth?

    Count all the stars in the sky? Impossible. Count all the species on Earth? Ditto. But the biodiversity crisis demands that we try.

    What is a species?

    A “simple” concept that's been muddied by evolutionary data; a clear definition may be a long time in coming.

    Why does lateral transfer occur in so many species and how?

    Once considered rare, gene swapping, particularly among microbes, is proving quite common. But why and how genes are so mobile—and the effect on fitness—remains to be determined.

    Who was LUCA (the last universal common ancestor)?

    Ideas about the origin of this ancient "mother" of all organisms abound. The continued discovery of primitive microbes, along with comparative genomics, should help resolve life's deep past.

    How did flowers evolve?

    Darwin called this question an "abominable mystery." Flowers arose among the seed plants, relatives of the cycads and conifers, but the details of their evolution remain obscure.

    How do plants make cell walls?

    Cellulose and pectin walls surround cells, keeping water in and supporting tall trees. Their biochemistry holds the secrets to turning plant biomass into fuel.

    How is plant growth controlled?

    Redwoods grow to be hundreds of meters tall, Arctic willows barely 10 centimeters. Understanding the difference could lead to higher-yielding crops.

    Why aren't all plants immune to all diseases?

    Plants can mount a general immune response, but they also maintain molecular snipers that take out specific pathogens. Plant pathologists are asking why different species, even closely related ones, have different sets of defenders. The answer could result in hardier crops.

    What is the basis of variation in stress tolerance in plants?

    We need crops that better withstand drought, cold, and other stresses. But there are so many genes involved, in complex interactions, that no one has yet figured out which ones work how.

    What caused mass extinctions?

    A huge impact did in the dinosaurs, but the search for other catastrophic triggers of extinction has had no luck so far. If more subtle or stealthy culprits are to blame, they will take considerably longer to find.

    Can we prevent extinction?

    Finding cost-effective and politically feasible ways to save many endangered species requires creative thinking.

    Why were some dinosaurs so large?

    Dinosaurs reached almost unimaginable sizes, some in less than 20 years. But how did the long-necked sauropods, for instance, eat enough to pack on up to 100 tons without denuding their world?

    How will ecosystems respond to global warming?

    To anticipate the effects of the intensifying greenhouse, climate modelers will have to focus on regional changes and ecologists on the right combination of environmental changes.

    How many kinds of humans coexisted in the recent past, and how did they relate?

    The new dwarf human species fossil from Indonesia suggests that at least four kinds of humans thrived in the past 100,000 years. Better dates and additional material will help confirm or revise this picture.

    What gave rise to modern human behavior?

    Did Homo sapiens acquire abstract thought, language, and art gradually or in a cultural “big bang,” which in Europe occurred about 40,000 years ago? Data from Africa, where our species arose, may hold the key to the answer.

    What are the roots of human culture?

    No animal comes close to having humans' ability to build on previous discoveries and pass the improvements on. What determines those differences could help us understand how human culture evolved.

    What are the evolutionary roots of language and music?

    Neuroscientists exploring how we speak and make music are just beginning to find clues as to how these prized abilities arose.

    What are human races, and how did they develop?

    Anthropologists have long argued that race lacks biological reality. But our genetic makeup does vary with geographic origin and as such raises political and ethical as well as scientific questions.

    Why do some countries grow and others stagnate?

    From Norway to Nigeria, living standards across countries vary enormously, and they're not becoming more equal.

    What impact do large government deficits have on a country's interest rates and economic growth rate?

    The United States could provide a test case.

    Are political and economic freedom closely tied?

    China may provide one answer.

    Why has poverty increased and life expectancy declined in sub-Saharan Africa?

    Almost all efforts to reduce poverty in sub-Saharan Africa have failed. Figuring out what will work is crucial to alleviating massive human suffering.

    The following six mathematics questions are drawn from a list of seven outstanding problems selected by the Clay Mathematics Institute. (The seventh problem is discussed on p.96.) For more details, go to

    Is there a simple test for determining whether an elliptic curve has an infinite number of rational solutions?

    Equations of the form y² = x³ + ax + b are powerful mathematical tools. The Birch and Swinnerton-Dyer conjecture tells how to determine how many solutions they have in the realm of rational numbers—information that could solve a host of problems, if the conjecture is true.
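    A hands-on way to meet the conjecture's raw data is to count the curve's points modulo small primes; roughly speaking, the conjecture ties the statistics of these counts to whether the rational solutions are finite or infinite. A minimal sketch (the helper name and the sample curve y² = x³ − x are illustrative choices, not from the text):

```python
def points_mod_p(a, b, p):
    """Count solutions of y^2 = x^3 + a*x + b modulo the prime p,
    including the customary point at infinity."""
    # Map each quadratic residue mod p to its square roots.
    roots = {}
    for y in range(p):
        roots.setdefault(y * y % p, []).append(y)
    count = 1  # the point at infinity
    for x in range(p):
        count += len(roots.get((x ** 3 + a * x + b) % p, []))
    return count

# Point counts for the curve y^2 = x^3 - x at a few small primes.
for p in (5, 7, 11, 13):
    print(p, points_mod_p(-1, 0, p))
```

    Tallying how these counts compare with p, prime after prime, is the kind of evidence number theorists weigh when testing the conjecture on particular curves.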

    Can a Hodge cycle be written as a sum of algebraic cycles?

    Two useful mathematical structures arose independently in geometry and in abstract algebra. The Hodge conjecture posits a surprising link between them, but the bridge remains to be built.

    Will mathematicians unleash the power of the Navier-Stokes equations?

    First written down in the 1840s, the equations hold the keys to understanding both smooth and turbulent flow. To harness them, though, theorists must find out exactly when they work and under what conditions they break down.

    Does Poincaré's test identify spheres in four-dimensional space?

    You can tie a string around a doughnut, but it will slide right off a sphere. The mathematical principle behind that observation can reliably spot every spherelike object in 3D space. Henri Poincaré conjectured that it should also work in the next dimension up, but no one has proved it yet.

    Do mathematically interesting zero-value solutions of the Riemann zeta function all have the form ½ + bi?

    Don't sweat the details. Since the mid-19th century, the “Riemann hypothesis” has been the monster catfish in mathematicians' pond. If true, it will give them a wealth of information about the distribution of prime numbers and other long-standing mysteries.
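    One can at least watch a zero appear numerically. The brute-force sketch below (not how zeros are computed in practice; the function name and scan range are illustrative) sums the alternating Dirichlet series for η(s), converts it to ζ(s), and finds |ζ(½ + it)| dipping toward zero near t ≈ 14.13, the first nontrivial zero:

```python
def zeta_half(t, terms=4000, window=400):
    """Rough value of zeta(1/2 + it) via the alternating (eta) series,
    smoothing the oscillating partial sums by averaging a tail window."""
    s = 0.5 + 1j * t
    partial, running = [], 0j
    for n in range(1, terms + 1):
        running += (-1) ** (n + 1) * n ** (-s)
        partial.append(running)
    eta = sum(partial[-window:]) / window   # accelerated eta(s)
    return eta / (1 - 2 ** (1 - s))         # eta(s) -> zeta(s)

# Scan the critical line: |zeta| bottoms out near t ~ 14.13.
ts = [13.9 + 0.01 * k for k in range(51)]
best = min(ts, key=lambda t: abs(zeta_half(t)))
print(f"minimum of |zeta(1/2 + it)| found near t = {best:.2f}")
```

    The Riemann hypothesis asserts that every such dip to zero happens at real part exactly ½.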

    Does the Standard Model of particle physics rest on solid mathematical foundations?

    For almost 50 years, the model has rested on “quantum Yang-Mills theory,” which links the behavior of particles to structures found in geometry. The theory is breathtakingly elegant and useful—but no one has proved that it's sound.

  21. What Is the Biological Basis of Consciousness?

    1. Greg Miller

    For centuries, debating the nature of consciousness was the exclusive purview of philosophers. But if the recent torrent of books on the topic is any indication, a shift has taken place: Scientists are getting into the game.

    Has the nature of consciousness finally shifted from a philosophical question to a scientific one that can be solved by doing experiments? The answer, as with any question related to this topic, depends on whom you ask. But scientific interest in this slippery, age-old question seems to be gathering momentum. So far, however, although theories abound, hard data are sparse.

    The discourse on consciousness has been hugely influenced by René Descartes, the French philosopher who in the mid-17th century declared that body and mind are made of different stuff entirely. It must be so, Descartes concluded, because the body exists in both time and space, whereas the mind has no spatial dimension.

    Recent scientifically oriented accounts of consciousness generally reject Descartes's solution; most prefer to treat body and mind as different aspects of the same thing. In this view, consciousness emerges from the properties and organization of neurons in the brain. But how? And how can scientists, with their devotion to objective observation and measurement, gain access to the inherently private and subjective realm of consciousness?

    Some insights have come from examining neurological patients whose injuries have altered their consciousness. Damage to certain evolutionarily ancient structures in the brainstem robs people of consciousness entirely, leaving them in a coma or a persistent vegetative state. Although these regions may be a master switch for consciousness, they are unlikely to be its sole source. Different aspects of consciousness are probably generated in different brain regions. Damage to visual areas of the cerebral cortex, for example, can produce strange deficits limited to visual awareness. One extensively studied patient, known as D.F., is unable to identify shapes or determine the orientation of a thin slot in a vertical disk. Yet when asked to pick up a card and slide it through the slot, she does so easily. At some level, D.F. must know the orientation of the slot to be able to do this, but she seems not to know she knows.

    Cleverly designed experiments can produce similar dissociations of unconscious and conscious knowledge in people without neurological damage. And researchers hope that scanning the brains of subjects engaged in such tasks will reveal clues about the neural activity required for conscious awareness. Work with monkeys also may elucidate some aspects of consciousness, particularly visual awareness. One experimental approach is to present a monkey with an optical illusion that creates a “bistable percept,” looking like one thing one moment and another the next. (The orientation-flipping Necker cube is a well-known example.) Monkeys can be trained to indicate which version they perceive. At the same time, researchers hunt for neurons that track the monkey's perception, in hopes that these neurons will lead them to the neural systems involved in conscious visual awareness and ultimately to an explanation of how a particular pattern of photons hitting the retina produces the experience of seeing, say, a rose.


    Experiments under way at present generally address only pieces of the consciousness puzzle, and very few directly address the most enigmatic aspect of the conscious human mind: the sense of self. Yet the experimental work has begun, and if the results don't provide a blinding insight into how consciousness arises from tangles of neurons, they should at least refine the next round of questions.

    Ultimately, scientists would like to understand not just the biological basis of consciousness but also why it exists. What selection pressure led to its development, and how many of our fellow creatures share it? Some researchers suspect that consciousness is not unique to humans, but of course much depends on how the term is defined. Biological markers for consciousness might help settle the matter and shed light on how consciousness develops early in life. Such markers could also inform medical decisions about loved ones who are in an unresponsive state.

    Until fairly recently, tackling the subject of consciousness was a dubious career move for any scientist without tenure (and perhaps a Nobel Prize already in the bag). Fortunately, more young researchers are now joining the fray. The unanswered questions should keep them—and the printing presses—busy for many years to come.

  22. Why Do Humans Have So Few Genes?

    1. Elizabeth Pennisi

    When leading biologists were unraveling the sequence of the human genome in the late 1990s, they ran a pool on the number of genes contained in the 3 billion base pairs that make up our DNA. Few bets came close. The conventional wisdom a decade or so ago was that we need about 100,000 genes to carry out the myriad cellular processes that keep us functioning. But it turns out that we have only about 25,000 genes—about the same number as a tiny flowering plant called Arabidopsis and barely more than the worm Caenorhabditis elegans.

    That big surprise reinforced a growing realization among geneticists: Our genomes and those of other mammals are far more flexible and complicated than they once seemed. The old notion of one gene/one protein has gone by the board: It is now clear that many genes can make more than one protein. Regulatory proteins, RNA, noncoding bits of DNA, even chemical and structural alterations of the genome itself control how, where, and when genes are expressed. Figuring out how all these elements work together to choreograph gene expression is one of the central challenges facing biologists.

    In the past few years, it has become clear that a phenomenon called alternative splicing is one reason human genomes can produce such complexity with so few genes. Human genes contain both coding DNA—exons—and noncoding DNA. In some genes, different combinations of exons can become active at different times, and each combination yields a different protein. Alternative splicing was long considered a rare hiccup during transcription, but researchers have concluded that it may occur in half—some say close to all—of our genes. That finding goes a long way toward explaining how so few genes can produce hundreds of thousands of different proteins. But how the transcription machinery decides which parts of a gene to read at any particular time is still largely a mystery.
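The combinatorial leverage of alternative splicing can be illustrated with a toy calculation (a deliberately schematic model: the exon names and the assumption that optional exons are included independently are mine, and real splice-site choice is far more constrained):

```python
from itertools import combinations

def splice_variants(optional_exons):
    """Enumerate every subset of optional 'cassette' exons.

    Toy model: each optional exon is independently included or
    skipped, so n optional exons give 2**n possible transcripts.
    """
    variants = []
    for r in range(len(optional_exons) + 1):
        for combo in combinations(optional_exons, r):
            variants.append(combo)
    return variants

# A hypothetical gene with just 4 optional exons already yields 16
# distinct transcripts; 2**n grows quickly with each additional exon.
print(len(splice_variants(["e1", "e2", "e3", "e4"])))  # 16
```

Even under this crude independence assumption, a genome of ~25,000 genes averaging a handful of optional exons apiece could encode hundreds of thousands of distinct proteins, which is the point of the paragraph above.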

    The same could be said for the mechanisms that determine which genes or suites of genes are turned on or off at particular times and places. Researchers are discovering that each gene needs a supporting cast of hundreds to get its job done. They include proteins that shut down or activate a gene, for example by adding acetyl or methyl groups to the DNA. Other proteins, called transcription factors, interact with the genes more directly: They bind to landing sites situated near the gene under their control. As with alternative splicing, activation of different combinations of landing sites makes possible exquisite control of gene expression, but researchers have yet to figure out exactly how all these regulatory elements really work or how they fit in with alternative splicing.

    Approximate number of genes.

    In the past decade or so, researchers have also come to appreciate the key roles played by chromatin proteins and RNA in regulating gene expression. Chromatin proteins are essentially the packaging for DNA, holding chromosomes in well-defined spirals. By slightly changing shape, chromatin may expose different genes to the transcription machinery.

    Genes also dance to the tune of RNA. Small RNA molecules, many less than 30 bases, now share the limelight with other gene regulators. Many researchers who once focused on messenger RNA and other relatively large RNA molecules have in the past 5 years turned their attention to these smaller cousins, including microRNA and small nuclear RNA. Surprisingly, RNAs in these various guises shut down and otherwise alter gene expression. They also are key to cell differentiation in developing organisms, but the mechanisms are not fully understood.

    Researchers have made enormous strides in pinpointing these various mechanisms. By matching up genomes from organisms on different branches on the evolutionary tree, genomicists are locating regulatory regions and gaining insights into how mechanisms such as alternative splicing evolved. These studies, in turn, should shed light on how these regions work. Experiments in mice, such as the addition or deletion of regulatory regions and manipulating RNA, and computer models should also help. But the central question is likely to remain unsolved for a long time: How do all these features meld together to make us whole?

  23. To What Extent Are Genetic Variation and Personal Health Linked?

    1. Jennifer Couzin

    Forty years ago, doctors learned why some patients who received the anesthetic succinylcholine awoke normally but remained temporarily paralyzed and unable to breathe: They shared an inherited quirk that slowed their metabolism of the drug. Later, scientists traced sluggish succinylcholine metabolism to a particular gene variant. Roughly 1 in 3500 people carry two deleterious copies, putting them at high risk of this distressing side effect.
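The 1-in-3500 figure also pins down the frequency of the variant itself, assuming Hardy-Weinberg equilibrium (that assumption is mine; the article gives only the rate of people with two copies):

```python
import math

# If 1 in 3500 people carry two deleterious copies, that genotype
# frequency is q**2 under Hardy-Weinberg equilibrium, so the allele
# frequency q and the share of unaffected single-copy carriers (2*p*q)
# follow directly.
affected = 1 / 3500          # frequency of two deleterious copies
q = math.sqrt(affected)      # deleterious allele frequency
p = 1 - q
carriers = 2 * p * q         # one copy: unaffected carriers

print(round(q, 4))           # ~0.0169
print(round(carriers, 3))    # ~0.033, i.e., roughly 1 in 30 people
```

The arithmetic shows why such side effects surprise clinicians: a variant rare enough to paralyze only 1 patient in 3500 is nonetheless carried silently by about 1 person in 30.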


    The solution to the succinylcholine mystery was among the first links drawn between genetic variation and an individual's response to drugs. Since then, a small but growing number of differences in drug metabolism have been linked to genetics, helping explain why some patients benefit from a particular drug, some gain nothing, and others suffer toxic side effects.

    The same sort of variation, it is now clear, plays a key role in individual risks of coming down with a variety of diseases. Gene variants have been linked to elevated risks for disorders from Alzheimer's disease to breast cancer, and they may help explain why, for example, some smokers develop lung cancer whereas many others don't.

    These developments have led to hopes—and some hype—that we are on the verge of an era of personalized medicine, one in which genetic tests will determine disease risks and guide prevention strategies and therapies. But digging up the DNA responsible—if in fact DNA is responsible—and converting that knowledge into gene tests that doctors can use remains a formidable challenge.

    Many conditions, including various cancers, heart attacks, lupus, and depression, likely arise when a particular mix of genes collides with something in the environment, such as nicotine or a fatty diet. These multigene interactions are subtler and knottier than the single gene drivers of diseases such as hemophilia and cystic fibrosis; spotting them calls for statistical inspiration and rigorous experiments repeated again and again to guard against introducing unproven gene tests into the clinic. And determining treatment strategies will be no less complex: Last summer, for example, a team of scientists linked 124 different genes to resistance to four leukemia drugs.

    But identifying gene networks like these is only the beginning. One of the toughest tasks is replicating these studies—an especially difficult proposition in diseases that are not overwhelmingly heritable, such as asthma, or ones that affect fairly small patient cohorts, such as certain childhood cancers. Many clinical trials do not routinely collect DNA from volunteers, making it sometimes difficult for scientists to correlate disease or drug response with genes. Gene microarrays, which measure expression of thousands of genes at once, can be fickle and supply inconsistent results. Gene studies can also be prohibitively costly.

    Nonetheless, genetic dissection of some diseases—such as cancer, asthma, and heart disease—is galloping ahead. Progress in other areas, such as psychiatric disorders, is slower. Severely depressed or schizophrenic patients could benefit enormously from tests that reveal which drug and dose will help them the most, but unlike in asthma, drug response in psychiatric disorders can be difficult to quantify biologically, making gene-drug relations tougher to pin down.

    As DNA sequence becomes more available and technologies improve, the genetic patterns that govern health will likely come into sharper relief. Genetic tools still under construction, such as a haplotype map that will be used to discern genetic variation behind common diseases, could further accelerate the search for disease genes.

    The next step will be designing DNA tests to guide clinical decision-making—and using them. If history is any guide, integrating such tests into standard practice will take time. In emergencies—a heart attack, an acute cancer, or an asthma attack—such tests will be valuable only if they rapidly deliver results.

    Ultimately, comprehensive personalized medicine will come only if pharmaceutical companies want it to—and it will take enormous investments in research and development. Many companies worry that testing for genetic differences will narrow their market and squelch their profits.

    Still, researchers continue to identify new opportunities. In May, the Icelandic company deCODE Genetics reported that an experimental asthma drug that pharmaceutical giant Bayer had abandoned appeared to decrease the risk of heart attack in more than 170 patients who carried particular gene variants. The drug targets the protein produced by one of those genes. The finding is likely to be just a foretaste of the many surprises in store, as the braids binding DNA, drugs, and disease are slowly unwound.

  24. Can the Laws of Physics Be Unified?

    1. Charles Seife

    At its best, physics eliminates complexity by revealing underlying simplicity. Maxwell's equations, for example, describe all the confusing and diverse phenomena of classical electricity and magnetism by means of four simple rules. These equations are beautiful; they have an eerie symmetry, mirroring one another in an intricate dance of symbols. The four together feel as elegant, as whole, and as complete to a physicist as a Shakespearean sonnet does to a poet.

    The Standard Model of particle physics is an unfinished poem. Most of the pieces are there, and even unfinished, it is arguably the most brilliant opus in the literature of physics. With great precision, it describes all known matter—all the subatomic particles such as quarks and leptons—as well as the forces by which those particles interact with one another. These forces are electromagnetism, which describes how charged objects feel each other's influence; the weak force, which explains how particles can change their identities; and the strong force, which describes how quarks stick together to form protons and other composite particles. But as lovely as the Standard Model's description is, it is in pieces, and some of those pieces—those that describe gravity—are missing. It is a few shards of beauty that hint at something greater, like a few lines of Sappho on a fragment of papyrus.

    The beauty of the Standard Model is in its symmetry; mathematicians describe its symmetries with objects known as Lie groups. And a mere glimpse at the Standard Model's Lie group betrays its fragmented nature: SU(3) × SU(2) × U(1). Each of those pieces represents one type of symmetry, but the symmetry of the whole is broken. Each of the forces behaves in a slightly different way, so each is described with a slightly different symmetry.

    But those differences might be superficial. Electromagnetism and the weak force appear very dissimilar, but in the 1960s physicists showed that at high temperatures, the two forces “unify.” It becomes apparent that electromagnetism and the weak force are really the same thing, just as it becomes obvious that ice and liquid water are the same substance if you warm them up together. This connection led physicists to hope that the strong force could also be unified with the other two forces, yielding one large theory described by a single symmetry such as SU(5).

    A unified theory should have observable consequences. For example, if the strong force truly is the same as the electroweak force, then protons might not be truly stable; once in a long while, they should decay spontaneously. Despite many searches, nobody has spotted a proton decay, nor has anyone sighted any particles predicted by some symmetry-enhancing modifications to the Standard Model, such as supersymmetry. Worse yet, even such a unified theory can't be complete—as long as it ignores gravity.

    Fundamental forces.

    A theory that ties all four forces together is still lacking.


    Gravity is a troublesome force. The theory that describes it, general relativity, assumes that space and time are smooth and continuous, whereas the underlying quantum physics that governs subatomic particles and forces is inherently discontinuous and jumpy. Gravity clashes with quantum theory so badly that nobody has come up with a convincing way to build a single theory that includes all the particles, the strong and electroweak forces, and gravity all in one big bundle. But physicists do have some leads. Perhaps the most promising is superstring theory.

    Superstring theory has a large following because it provides a way to unify everything into one large theory with a single symmetry—SO(32) for one branch of superstring theory, for example—but it requires a universe with 10 or 11 dimensions, scads of undetected particles, and a lot of intellectual baggage that might never be verifiable. It may be that there are dozens of unified theories, only one of which is correct, but scientists may never have the means to determine which. Or it may be that the struggle to unify all the forces and particles is a fool's quest.

    In the meantime, physicists will continue to look for proton decays, as well as search for supersymmetric particles in underground traps and in the Large Hadron Collider (LHC) in Geneva, Switzerland, when it comes online in 2007. Scientists believe that LHC will also reveal the existence of the Higgs boson, a particle intimately related to fundamental symmetries in the Standard Model of particle physics. And physicists hope that one day, they will be able to finish the unfinished poem and frame its fearful symmetry.

  25. How Much Can Human Life Span Be Extended?

    1. Jennifer Couzin

    When Jeanne Calment died in a nursing home in southern France in 1997, she was 122 years old, the longest-living human ever documented. But Calment's uncommon status will fade in subsequent decades if the predictions of some biologists and demographers come true. Life-span extension in species from yeast to mice and extrapolation from life expectancy trends in humans have convinced a swath of scientists that humans will routinely coast beyond 100 or 110 years of age. (Today, 1 in 10,000 people in industrialized countries hold centenarian status.) Others say human life span may be far more limited. The elasticity found in other species might not apply to us. Furthermore, testing life-extension treatments in humans may be nearly impossible for practical and ethical reasons.


    Just 2 or 3 decades ago, research on aging was a backwater. But when molecular biologists began hunting for ways to prolong life, they found that life span was remarkably pliable. Reducing the activity of an insulinlike receptor more than doubles the life span of worms to a startling—for them—6 weeks. Put certain strains of mice on near-starvation but nutrient-rich diets, and they live 50% longer than normal.

    Some of these effects may not occur in other species. A worm's ability to enter a “dauer” state, which resembles hibernation, may be critical, for example. And shorter-lived species such as worms and fruit flies, whose aging has been delayed the most, may be more susceptible to life-span manipulation. But successful approaches are converging on a few key areas: calorie restriction; reducing levels of insulinlike growth factor 1 (IGF-1), a protein; and preventing oxidative damage to the body's tissues. All three might be interconnected, but so far that hasn't been confirmed (although calorie-restricted animals have low levels of IGF-1).

    Can these strategies help humans live longer? And how do we determine whether they will? Unlike drugs for cancer or heart disease, the benefits of antiaging treatments are fuzzier, making studies difficult to set up and to interpret. Safety is uncertain; calorie restriction reduces fertility in animals, and lab flies bred to live long can't compete with their wild counterparts. Furthermore, garnering results—particularly from younger volunteers, who may be likeliest to benefit because they've aged the least—will take so long that by the time results are in, those who began the study will be dead.

    That hasn't stopped scientists, some of whom have founded companies, from searching for treatments to slow aging. One intriguing question is whether calorie restriction works in humans. It's being tested in primates, and the National Institute on Aging in Bethesda, Maryland, is funding short-term studies in people. Volunteers in those trials have been on a stringent diet for up to 1 year while researchers monitor their metabolism and other factors that could hint at how they're aging.

    Insights could also come from genetic studies of centenarians, who may have inherited long life from their parents. Many scientists believe that average human life span has an inherent upper limit, although they don't agree on whether it's 85 or 100 or 150.

    One abiding question in the antiaging world is what the goal of all this work ought to be. Overwhelmingly, scientists favor treatments that will slow aging and stave off age-related diseases rather than simply extending life at its most decrepit. But even so, slowing aging could have profound social effects, upsetting actuarial tables and retirement plans.

    Then there's the issue of fairness: If antiaging therapies become available, who will receive them? How much will they cost? Individuals may find they can stretch their life spans. But that may be tougher to achieve for whole populations, although many demographers believe that the average life span will continue to climb as it has consistently for decades. If that happens, much of the increase may come from less dramatic strategies, such as heart disease and cancer prevention, that could also make the end of a long life more bearable.

  26. What Controls Organ Regeneration?

    1. R. John Davenport*
    1. R. John Davenport is an editor of Science's SAGE KE.

    Unlike automobiles, humans get along pretty well for most of their lives with their original parts. But organs do sometimes fail, and we can't go to the mechanic for an engine rebuild or a new water pump—at least not yet. Medicine has battled back many of the acute threats, such as infection, that curtailed human life in past centuries. Now, chronic illnesses and deteriorating organs pose the biggest drain on human health in industrialized nations, and they will only increase in importance as the population ages. Regenerative medicine—rebuilding organs and tissues—could conceivably be the 21st-century equivalent of antibiotics in the 20th. Before that can happen, researchers must understand the signals that control regeneration.

    Researchers have puzzled for centuries over how body parts replenish themselves. In the mid-1700s, for instance, Swiss researcher Abraham Trembley noted that when chopped into pieces, hydra—tubelike creatures with tentacles that live in fresh water—could grow back into complete, new organisms. Other scientists of the era examined the salamander's ability to replace a severed tail. And a century later, Thomas Hunt Morgan scrutinized planaria, flatworms that can regenerate even when whittled into 279 bits. But he decided that regeneration was an intractable problem and forsook planaria in favor of fruit flies.

    Mainstream biology has followed in Morgan's wake, focusing on animals suitable for studying genetic and embryonic development. But some researchers have pressed on with studies of regeneration superstars, and they've devised innovative strategies to tackle the genetics of these organisms. These efforts and investigations of new regeneration models—such as zebrafish and special mouse lines—are beginning to reveal the forces that guide regeneration and those that prevent it.

    Animals exploit three principal strategies to regenerate organs. First, working organ cells that normally don't divide can multiply and grow to replenish lost tissue, as occurs in injured salamander hearts. Second, specialized cells can undo their training—a process known as dedifferentiation—and assume a more pliable form that can replicate and later respecialize to reconstruct a missing part. Salamanders and newts take this approach to heal and rebuild a severed limb, as do zebrafish to mend clipped fins. Finally, pools of stem cells can step in to perform required renovations. Planaria tap into this resource when reconstructing themselves.


    Newts reprogram their cells to reconstruct a severed limb.


    Humans already plug into these mechanisms to some degree. For instance, after surgical removal of part of a liver, healing signals tell remaining liver cells to resume growth and division to expand the organ back to its original size. Researchers have found that when properly enticed, some types of specialized human cells can revert to a more nascent state (see p. 85). And stem cells help replenish our blood, skin, and bones. So why do our hearts fill with scar tissue, our lenses cloud, and our brain cells perish?

    Animals such as salamanders and planaria regenerate tissues by rekindling genetic mechanisms that guide the patterning of body structures during embryonic development. We employ similar pathways to shape our parts as embryos, but over the course of evolution, humans may have lost the ability to tap into them as adults, perhaps because the cell division required for regeneration elevated the likelihood of cancer. And we may have evolved the capacity to heal wounds rapidly to repel infection, even though speeding the pace means more scarring. Regeneration pros such as salamanders heal wounds methodically and produce pristine tissue. Avoiding fibrotic tissue could mean the difference between regenerating and not: Mouse nerves grow vigorously if experimentally severed in a way that prevents scarring, but if a scar forms, nerves wither.

    Unraveling the mysteries of regeneration will depend on understanding what separates our wound-healing process from that of animals that are able to regenerate. The difference might be subtle: Researchers have identified one strain of mice that seals up ear holes in weeks, whereas typical strains never do. A relatively modest number of genetic differences seems to underlie the effect. Perhaps altering a handful of genes would be enough to turn us into superhealers, too. But if scientists succeed in initiating the process in humans, new questions will emerge. What keeps regenerating cells from running amok? And what ensures that regenerated parts are the right size and shape, and in the right place and orientation? If researchers can solve these riddles—and it's a big “if”—people might be able to order up replacement parts for themselves, not just their '67 Mustangs.

  27. How Can a Skin Cell Become a Nerve Cell?

    1. Gretchen Vogel

    Like medieval alchemists who searched for an elixir that could turn base metals into gold, biology's modern alchemists have learned how to use oocytes to turn normal skin cells into valuable stem cells, and even whole animals. With practice, scientists have made nuclear transfer nearly routine, producing cattle, cats, mice, sheep, goats, pigs, and—as a Korean team announced in May—even human embryonic stem (ES) cells. They hope to go still further and turn the stem cells into treatments for previously untreatable diseases. But like the medieval alchemists, today's cloning and stem cell biologists are working largely with processes they don't fully understand: What actually happens inside the oocyte to reprogram the nucleus is still a mystery, and scientists have a lot to learn before they can direct a cell's differentiation as smoothly as nature's program of development does every time a fertilized egg gives rise to the multiple cell types that make up a live baby.

    Scientists have been investigating the reprogramming powers of the oocyte for half a century. In 1957, developmental biologists first discovered that they could insert nuclei from adult frog cells into frog eggs and create dozens of genetically identical tadpoles. But in 50 years, the oocyte has yet to give up its secrets.

    The answers lie deep in cell biology. Somehow, scientists know, the genes that control development—generally turned off in adult cells—get turned back on again by the oocyte, enabling the cell to take on the youthful potential of a newly fertilized egg. Scientists understand relatively little about these on-and-off switches in normal cells, however, let alone the unusual reversal that takes place during nuclear transfer.

    Cellular alchemist.

    A human oocyte.


    As cells differentiate, their DNA becomes more tightly packed, and genes that are no longer needed—or that should not be expressed—are blocked. The DNA wraps tightly around proteins called histones, and genes are then tagged with methyl groups that prevent the protein-making machinery in the cell from reaching them. Several studies have shown that enzymes that remove those methyl groups are crucial for nuclear transfer to work. But they are far from the only things that are needed.

    If scientists could uncover the oocyte's secrets, it might be possible to replicate its tricks without using oocytes themselves, a resource that is fairly difficult to obtain and the use of which raises numerous ethical questions. If scientists could come up with a cell-free bath that turned the clock back on already-differentiated cells, the implications could be enormous. Labs could rejuvenate cells from patients and perhaps then grow them into new tissue that could repair parts worn out by old age or disease.

    But scientists are far from sure if such cell-free alchemy is possible. The egg's very structure, its scaffolding of proteins that guide the chromosomes during cell division, may also play a key role in turning on the necessary genes. If so, developing an elixir of proteins that can turn back a cell's clock may remain elusive.

    To really make use of the oocyte's power, scientists still need to learn how to direct the development of the rejuvenated stem cells and guide them into forming specific tissues. Stem cells, especially those from embryos, spontaneously form dozens of cell types, but controlling that development to produce a single type of cell has proved more difficult. Although some teams have managed to produce nearly pure colonies of certain kinds of neural cells from ES cells, no one has managed to concoct a recipe that will direct the cells to become, say, a pure population of dopamine-producing neurons that could replace those missing in Parkinson's disease.

    Scientists are just beginning to understand how cues interact to guide a cell toward its final destiny. Decades of work in developmental biology have provided a start: Biologists have used mutant frogs, flies, mice, chicks, and fish to identify some of the main genes that control a developing cell's decision to become a bone cell or a muscle cell. But observing what goes wrong when a gene is missing is easier than learning to orchestrate differentiation in a culture dish. Understanding how the roughly 25,000 human genes work together to form tissues—and tweaking the right ones to guide an immature cell's development—will keep researchers occupied for decades. If they succeed, however, the result will be worth far more than its weight in gold.

  28. How Does a Single Somatic Cell Become a Whole Plant?

    1. Gretchen Vogel

    It takes a certain amount of flexibility for a plant to survive and reproduce. It can stretch its roots toward water and its leaves toward sunlight, but it has few options for escaping predators or finding mates. To compensate, many plants have evolved repair mechanisms and reproductive strategies that allow them to produce offspring even without the meeting of sperm and egg. Some can reproduce from outgrowths of stems, roots, and bulbs, but others are even more radical, able to create new embryos from single somatic cells. Most citrus trees, for example, can form embryos from the tissues surrounding the unfertilized gametes—a feat no animal can manage. The houseplant Bryophyllum can sprout embryos from the edges of its leaves, a bit like Athena springing from Zeus's head.

    Nearly 50 years ago, scientists learned that they could coax carrot cells to undergo such embryogenesis in the lab. Since then, people have used so-called somatic embryogenesis to propagate dozens of species, including coffee, magnolias, mangos, and roses. A Canadian company has planted entire forests of fir trees that started life in tissue culture. But like researchers who clone animals (see p. 85), plant scientists understand little about what actually controls the process. The search for answers might shed light on how cells' fates become fixed during development, and how plants manage to retain such flexibility.

    Scientists aren't even sure which cells are capable of embryogenesis. Although earlier work assumed that all plant cells were equally labile, recent evidence suggests that only a subset of cells can transform into embryos. But what those cells look like before their transformation is a mystery. Researchers have videotaped cultures in which embryos develop but found no visual pattern that hints at which cells are about to sprout, and staining for certain patterns of gene expression has been inconclusive.

    Power of one.

    Orange tree embryos can sprout from a single somatic cell.


    Researchers do have a few clues about the molecules that might be involved. In the lab, the herbicide 2,4-dichlorophenoxyacetic acid (sold as weed killer and called 2,4-D) can prompt cells in culture to elongate, build a new cell wall, and start dividing to form embryos. The herbicide is a synthetic analog of the plant hormones called auxins, which control everything from the plant's response to light and gravity to the ripening of fruit. Auxins might also be important in natural somatic embryogenesis: Embryos that sprout on top of veins near the leaf edge are exposed to relatively high levels of auxins. Recent work has also shown that over- or underexpression of certain genes in Arabidopsis plants can prompt embryogenesis in otherwise normal-looking leaf cells.

    Sorting out sex-free embryogenesis might help scientists understand the cellular switches that plants use to stay flexible while still keeping growth under control. Developmental biologists are keen to learn how those mechanisms compare in plants and animals. Indeed, some of the processes that control somatic embryogenesis may be similar to those that occur during animal cloning or limb regeneration (see p. 84).

    On a practical level, scientists would like to be able to use lab-propagation techniques on crop plants such as maize that still require normal pollination. That would speed up both breeding of new varieties and the production of hybrid seedlings—a flexibility that farmers and consumers could both appreciate.

  29. How Does Earth's Interior Work?

    1. Richard A. Kerr

    The plate tectonics revolution went only so deep. True, it made wonderful sense of most of the planet's geology. But that's something like understanding the face of Big Ben; there must be a lot more inside to understand about how and why it all works. In the case of Earth, there's another 6300 kilometers of rock and iron beneath the tectonic plates whose churnings constitute the inner workings of a planetary heat engine. Tectonic plates jostling about the surface are like the hands sweeping across the clock face: informative in many ways but largely mute as to what drives them.

    A long way to go.

    Grasping the workings of plate tectonics will require deeper probing.


    Earth scientists inherited a rather simple picture of Earth's interior from their pre-plate tectonics colleagues. Earth was like an onion. Seismic waves passing through the deep Earth suggested that beneath the broken skin of plates lies a 2800-kilometer layer of rocky mantle overlying 3470 kilometers of molten and—at the center—solid iron. The mantle was further subdivided at a depth of 670 kilometers into upper and lower layers, with a hint of a layer a couple of hundred kilometers thick at the bottom of the lower mantle.

    In the postrevolution era, the onion model continued to loom large. The dominant picture of Earth's inner workings divided the planet at the 670-kilometer depth, forming with the core a three-layer machine. Above the 670, the mantle churned slowly like a very shallow pot of boiling water, delivering heat and rock at mid-ocean ridges to make new crust and cool the interior and accepting cold sinking slabs of old plate at deep-sea trenches. A plume of hot rock might rise from just above the 670 to form a volcanic hot spot like Hawaii. But no hot rock rose up through the 670 barrier, and no cold rock sank down through it. Alternatively, argued a smaller contingent, the mantle churned from bottom to top like a deep stockpot, with plumes rising all the way from the core-mantle boundary.

    Forty years of probing inner Earth with ever more sophisticated seismic imaging has boosted the view of the engine's complexity without much calming the debate about how it works. Imaging now clearly shows that the 670 is no absolute barrier. Slabs penetrate the boundary, although with difficulty. Layered-earth advocates have duly dropped their impenetrable boundary to 1000 kilometers or deeper. Or maybe there's a flexible, semipermeable boundary somewhere that limits mixing to only the most insistent slabs or plumes.

    Now seismic imaging is also outlining two great globs of mantle rock standing beneath Africa and the Pacific like pistons. Researchers disagree whether they are hotter than average and rising under their own buoyancy, denser and sinking, or merely passively being carried upward by adjacent currents. Thin lenses of partially melted rock dot the mantle bottom, perhaps marking the bottom of plumes, or perhaps not. Geochemists reading the entrails of elements and isotopes in mantle-derived rocks find signs of five long-lived “reservoirs” that must have resisted mixing in the mantle for billions of years. But they haven't a clue where in the depths of the mantle those reservoirs might be hiding.

    How can we disassemble the increasingly complex planetary machine and find what makes it tick? With more of the same, plus a large dose of patience. After all, plate tectonics was more than a half-century in the making, and those revolutionaries had to look little deeper than the sea floor.

    Seismic imaging will continue to improve as better seismometers are spread more evenly about the globe. Seismic data are already distinguishing between temperature and compositional effects, painting an even more complex picture of mantle structure. Mineral physicists working in the lab will tease out more properties of rock under deep mantle conditions to inform interpretation of the seismic data, although still handicapped by the uncertain details of mantle composition. And modelers will more faithfully simulate the whole machine, drawing on seismics, mineral physics, and subtle geophysical observations such as gravity variations. Another 40 years should do it.

  30. Are We Alone in the Universe?

    1. Richard A. Kerr

    Alone, in all that space? Not likely. Just do the numbers: Several hundred billion stars in our galaxy, hundreds of billions of galaxies in the observable universe, and 150 planets spied already in the immediate neighborhood of the sun. That should make for plenty of warm, scummy little ponds where life could come together to begin billions of years of evolution toward technology-wielding creatures like ourselves. No, the really big question is when, if ever, we'll have the technological wherewithal to reach out and touch such intelligence. With a bit of luck, it could be in the next 25 years.
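    The "just do the numbers" argument above is a Drake-style chain of multiplications: start with the stars and whittle down by a fraction at each step. A minimal sketch of that arithmetic follows; every fraction here is an assumed placeholder for illustration, not a figure from the article.

```python
# Drake-style estimate: multiply fractions down from stars in the galaxy
# to civilizations we might detect. All parameter values are assumptions
# chosen for this sketch, not numbers from the text.
def drake(n_stars, f_planets, f_habitable, f_life, f_intelligent, f_detectable):
    """Chain the fractions together to estimate detectable civilizations."""
    return n_stars * f_planets * f_habitable * f_life * f_intelligent * f_detectable

# Several hundred billion stars, with guessed fractions at each step:
civs = drake(3e11, 0.5, 0.1, 0.1, 0.01, 0.01)
print(f"~{civs:,.0f} detectable civilizations under these guesses")
```

    The point of the exercise is not the final number, which swings by orders of magnitude with the guessed fractions, but that no plausible choice of fractions drives it to zero.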

    Workers in the search for extraterrestrial intelligence (SETI) would have needed more than a little luck in the first 45 years of the modern hunt for like-minded colleagues out there. Radio astronomer Frank Drake's landmark Project Ozma was certainly a triumph of hope over daunting odds. In 1960, Drake pointed a 26-meter radio telescope dish in Green Bank, West Virginia, at two stars for a few days each. Given the vacuum-tube technology of the time, he could scan across 0.4 megahertz of the microwave spectrum one channel at a time.

    Almost 45 years later, the SETI Institute in Mountain View, California, completed its 10-year-long Project Phoenix. Often using the 305-meter antenna at Arecibo, Puerto Rico, Phoenix researchers searched 710 star systems at 28 million channels simultaneously across an 1800-megahertz range. All in all, the Phoenix search was 100 trillion times more effective than Ozma was.

    Besides stunning advances in search power, the first 45 years of modern SETI have also seen a diversification of search strategies. The Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations (SERENDIP) has scanned billions of radio sources in the Milky Way by piggybacking receivers on antennas in use by observational astronomers, including Arecibo. And other groups are turning modest-sized optical telescopes to searching for nanosecond flashes from alien lasers.

    Listening for E.T.

    The SETI Institute is deploying an array of antennas and tying them into a giant “virtual telescope.”


    Still, nothing has been heard. But then, Phoenix, for example, scanned just one or two nearby sunlike stars out of each 100 million stars out there. For such sparse sampling to work, advanced, broadcasting civilizations would have to be abundant, or searchers would have to get very lucky.

    To find the needle in a galaxy-size haystack, SETI workers are counting on the consistently exponential growth of computing power to continue for another couple of decades. In northern California, the SETI Institute has already begun constructing an array composed of individual 6-meter antennas. Ever-cheaper computer power will eventually tie 350 such antennas into “virtual telescopes,” allowing scientists to search many targets at once. If Moore's law—that the cost of computation halves every 18 months—holds for another 15 years or so, SETI workers plan to use this antenna array approach to check out not a few thousand but perhaps a few million or even tens of millions of stars for alien signals. If there were just 10,000 advanced civilizations in the galaxy, they could well strike pay dirt before Science turns 150.
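    The Moore's-law projection in the paragraph above is easy to check: a doubling every 18 months compounds to roughly a thousandfold gain over 15 years, which is what turns "a few thousand" targets into "a few million." A quick sketch of that arithmetic (the current target count is an assumption for illustration):

```python
# If computing power per dollar doubles every 18 months, how much more
# search capacity does a fixed budget buy after a given number of years?
def growth_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

factor = growth_factor(15)        # 10 doublings -> 1024x
stars_now = 1_000                 # "a few thousand" targets today (assumed)
print(f"{factor:.0f}x capacity -> ~{int(stars_now * factor):,} stars searchable")
```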

    The technology may well be available in coming decades, but SETI will also need money. That's no easy task in a field with as high a “giggle factor” as SETI has. The U.S. Congress forced NASA to wash its hands of SETI in 1993 after some congressmen mocked the whole idea of spending federal money to look for “little green men with misshapen heads,” as one of them put it. Searching for another tippy-top branch of the evolutionary tree still isn't part of the NASA vision. For more than a decade, private funding alone has driven SETI. But the SETI Institute's planned $35 million array is only a prototype of the Square Kilometer Array that would put those tens of millions of stars within reach of SETI workers. For that, mainstream radio astronomers will have to be onboard—or we'll be feeling alone in the universe a long time indeed.

  31. How and Where Did Life on Earth Arise?

    1. Carl Zimmer*
    1. Carl Zimmer is the author of Soul Made Flesh: The Discovery of the Brain—and How it Changed the World.

    For the past 50 years, scientists have attacked the question of how life began in a pincer movement. Some approach it from the present, moving backward in time from life today to its simpler ancestors. Others march forward from the formation of Earth 4.55 billion years ago, exploring how lifeless chemicals might have become organized into living matter.

    Working backward, paleontologists have found fossils of microbes dating back at least 3.4 billion years. Chemical analysis of even older rocks suggests that photosynthetic organisms were already well established on Earth by 3.7 billion years ago. Researchers suspect that the organisms that left these traces shared the same basic traits found in all life today. All free-living organisms encode genetic information in DNA and catalyze chemical reactions using proteins. Because DNA and proteins depend so intimately on each other for their survival, it's hard to imagine one of them having evolved first. But it's just as implausible for them to have emerged simultaneously out of a prebiotic soup.

    Experiments now suggest that earlier forms of life could have been based on a third kind of molecule found in today's organisms: RNA. Once considered nothing more than a cellular courier, RNA turns out to be astonishingly versatile, not only encoding genetic information but also acting like a protein. Some RNA molecules switch genes on and off, for example, whereas others bind to proteins and other molecules. Laboratory experiments suggest that RNA could have replicated itself and carried out the other functions required to keep a primitive cell alive.

    Only after life passed through this “RNA world,” many scientists now agree, did it take on a more familiar cast. Proteins are thousands of times more efficient as catalysts than RNA is, and so once they emerged they would have been favored by natural selection. Likewise, genetic information can be replicated from DNA with far fewer errors than it can from RNA.

    Other scientists have focused their efforts on figuring out how the lifeless chemistry of a prebiotic Earth could have given rise to an RNA world. In 1953, working at the University of Chicago, Stanley Miller and Harold Urey demonstrated that experiments could shed light on this question. They ran an electric current through a mix of ammonia, methane, and other gases believed at the time to have been present on early Earth. They found that they could produce amino acids and other important building blocks of life.

    Cauldron of life?

    Deep-sea vents are one proposed site for life's start.


    Today, many scientists argue that the early atmosphere was dominated by other gases, such as carbon dioxide. But experiments in recent years have shown that under these conditions, many building blocks of life can be formed. In addition, comets and meteorites may have delivered organic compounds from space.

    Just where on Earth these building blocks came together as primitive life forms is a subject of debate. Starting in the 1980s, many scientists argued that life got its start in the scalding, mineral-rich waters streaming out of deep-sea hydrothermal vents. Evidence for a hot start included studies on the tree of life, which suggested that the most primitive species of microbes alive today thrive in hot water. But the hot-start hypothesis has cooled off a bit. Recent studies suggest that heat-loving microbes are not living fossils. Instead, they may have descended from less hardy species and evolved new defenses against heat. Some skeptics also wonder how delicate RNA molecules could have survived in boiling water. No single strong hypothesis has taken the hot start's place, however, although suggestions include tidal pools or oceans covered by glaciers.

    Research projects now under way may shed more light on how life began. Scientists are running experiments in which RNA-based cells may be able to reproduce and evolve. NASA and the European Space Agency have launched probes that will visit comets, narrowing down the possible ingredients that might have been showered on early Earth.

    Most exciting of all is the possibility of finding signs of life on Mars. Recent missions to Mars have provided strong evidence that shallow seas of liquid water once existed on the Red Planet—suggesting that Mars might once have been hospitable to life. Future Mars missions will look for signs of life hiding in underground refuges, or fossils of extinct creatures. If life does turn up, the discovery could mean that life arose independently on both planets—suggesting that it is common in the universe—or that it arose on one planet and spread to the other. Perhaps martian microbes were carried to Earth on a meteorite 4 billion years ago, infecting our sterile planet.

  32. What Determines Species Diversity?

    1. Elizabeth Pennisi

    Countless species of plants, animals, and microbes fill every crack and crevice on land and in the sea. They make the world go 'round, converting sunlight to energy that fuels the rest of life, cycling carbon and nitrogen between inorganic and organic forms, and modifying the landscape.

    In some places and some groups, hundreds of species exist, whereas in others, very few have evolved; the tropics, for example, are a complex paradise compared to higher latitudes. Biologists are striving to understand why. The interplay between environment and living organisms, and between the organisms themselves, plays a key role in encouraging or discouraging diversity, as do human disturbances, predator-prey relationships, and other food web connections. But exactly how these and other forces work together to shape diversity is largely a mystery.

    The challenge is daunting. Baseline data are poor, for example: We don't yet know how many plant and animal species there are on Earth, and researchers can't even begin to predict the numbers and kinds of organisms that make up the microbial world. Researchers probing the evolution of, and limits to, diversity also lack a standardized time scale, because evolution takes place over periods lasting from days to millions of years. Moreover, there can be almost as much variation within a species as between two closely related ones. Nor is it clear which genetic changes give rise to new species, or how much they truly influence speciation.

    Understanding what shapes diversity will require a major interdisciplinary effort, involving paleontological interpretation, field studies, laboratory experimentation, genomic comparisons, and effective statistical analyses. A few exhaustive inventories, such as the United Nations' Millennium Project and an around-the-world assessment of genes from marine microbes, should improve baseline data, but they will barely scratch the surface. Models that predict when one species will split into two will help. And an emerging discipline called evo-devo is probing how genes involved in development contribute to evolution. Together, these efforts will go a long way toward clarifying the history of life.


    Paleontologists have already made headway in tracking the expansion and contraction of the ranges of various organisms over the millennia. They are finding that geographic distribution plays a key role in speciation. Future studies should continue to reveal large-scale patterns of distribution and perhaps shed more light on the origins of mass extinctions and the effects of these catastrophes on the evolution of new species.

    From field studies of plants and animals, researchers have learned that habitat can influence morphology and behavior—particularly sexual selection—in ways that hasten or slow down speciation. Evolutionary biologists have also discovered that speciation can stall out, for example, as separated populations become reconnected, homogenizing genomes that would otherwise diverge. Molecular forces, such as low mutation rates or meiotic drive—in which certain alleles have an increased likelihood of being passed from one generation to the next—influence the rate of speciation.

    And in some cases, differences in diversity can vary within an ecosystem: Edges of ecosystems sometimes support fewer species than the interior.

    Evolutionary biologists are just beginning to sort out how all these factors are intertwined in different ways for different groups of organisms. The task is urgent: Figuring out what shapes diversity could be important for understanding the nature of the wave of extinctions the world is experiencing and for determining strategies to mitigate it.

  33. What Genetic Changes Made Us Uniquely Human?

    1. Elizabeth Culotta

    Every generation of anthropologists sets out to explore what it is that makes us human. Famed paleoanthropologist Louis Leakey thought tools made the man, and so when he uncovered hominid bones near stone tools in Tanzania in the 1960s, he labeled the putative toolmaker Homo habilis, the earliest member of the human genus. But then primatologist Jane Goodall demonstrated that chimps also use tools of a sort, and today researchers debate whether H. habilis truly belongs in Homo. Later studies have homed in on traits such as bipedality, culture, language, humor, and, of course, a big brain as the unique birthright of our species. Yet many of these traits can also be found, at least to some degree, in other creatures: Chimps have rudimentary culture, parrots speak, and some rats seem to giggle when tickled.

    What is beyond doubt is that humans, like every other species, have a unique genome shaped by our evolutionary history. Now, for the first time, scientists can address anthropology's fundamental question at a new level: What are the genetic changes that make us human?

    With the human genome in hand and primate genome data beginning to pour in, we are entering an era in which it may become possible to pinpoint the genetic changes that help separate us from our closest relatives. A rough draft of the chimp sequence has already been released, and a more detailed version is expected soon. The genome of the macaque is nearly complete, the orangutan is under way, and the marmoset was recently approved. All these will help reveal the ancestral genotype at key places on the primate tree.

    The genetic differences revealed between humans and chimps are likely to be profound, despite the oft-repeated statistic that only about 1.2% of our DNA differs from that of chimps. A change in every 100th base could affect thousands of genes, and the percentage difference becomes much larger if you count insertions and deletions. Even if we document all of the perhaps 40 million sequence differences between humans and chimps, what do they mean? Many are probably simply the consequence of 6 million years of genetic drift, with little effect on body or behavior, whereas other small changes—perhaps in regulatory, noncoding sequences—may have dramatic consequences.
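    The back-of-envelope arithmetic behind those figures is worth making explicit. Assuming the usual rounded human genome size of about 3 billion bases (a standard figure, not one given in the article), a 1.2% single-base divergence works out to tens of millions of substitutions even before insertions and deletions are counted:

```python
# Rough check on the numbers in the text: what does a ~1.2% single-base
# divergence mean across a ~3-billion-base genome? The genome size is the
# conventional rounded figure, assumed here for illustration.
genome_size = 3_000_000_000
divergence = 0.012                        # ~1.2% of bases differ

substitutions = genome_size * divergence  # single-base differences only
print(f"~{substitutions / 1e6:.0f} million substitutions")
```

    That lands near the article's "perhaps 40 million sequence differences" once indels are added on top of the substitutions.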

    Half of the differences might define a chimp rather than a human. How can we sort them all out?


    One way is to zero in on the genes that have been favored by natural selection in humans. Studies seeking subtle signs of selection in the DNA of humans and other primates have identified dozens of genes, in particular those involved in host-pathogen interactions, reproduction, sensory systems such as olfaction and taste, and more.

    But not all of these genes helped set us apart from our ape cousins originally. Our genomes reveal that we have evolved in response to malaria, but malaria defense didn't make us human. So some researchers have started with clinical mutations that impair key traits, then traced the genes' evolution, an approach that has identified a handful of tantalizing genes. For example, MCPH1 and ASPM cause microcephaly when mutated, FOXP2 causes speech defects, and all three show signs of selection pressure during human, but not chimp, evolution. Thus they may have played roles in the evolution of humans' large brains and speech.

    But even with genes like these, it is often difficult to be completely sure of what they do. Knockout experiments, the classic way to reveal function, can't be done in humans and apes for ethical reasons. Much of the work will therefore demand comparative analyses of the genomes and phenotypes of large numbers of humans and apes. Already, some researchers are pushing for a “great ape 'phenome' project” to match the incoming tide of genomic data with more phenotypic information on apes. Other researchers argue that clues to function can best be gleaned by mining natural human variability, matching mutations in living people to subtle differences in biology and behavior. Both strategies face logistical and ethical problems, but some progress seems likely.

    A complete understanding of uniquely human traits will, however, include more than DNA. Scientists may eventually circle back to those long-debated traits of sophisticated language, culture, and technology, in which nurture as well as nature plays a leading role. We're in the age of the genome, but we can still recognize that it takes much more than genes to make the human.

  34. How Are Memories Stored and Retrieved?

    1. Greg Miller

    Packed into the kilogram or so of neural wetware between the ears is everything we know: a compendium of useful and trivial facts about the world, the history of our lives, plus every skill we've ever learned, from riding a bike to persuading a loved one to take out the trash. Memories make each of us unique, and they give continuity to our lives. Understanding how memories are stored in the brain is an essential step toward understanding ourselves.

    Neuroscientists have already made great strides, identifying key brain regions and potential molecular mechanisms. Still, many important questions remain unanswered, and a chasm gapes between the molecular and whole-brain research.

    The birth of the modern era of memory research is often pegged to the publication, in 1957, of an account of the neurological patient H.M. At age 27, H.M. had large chunks of the temporal lobes of his brain surgically removed in a last-ditch effort to relieve chronic epilepsy. The surgery worked, but it left H.M. unable to remember anything that happened—or anyone he met—after his surgery. The case showed that the medial temporal lobes (MTL), which include the hippocampus, are crucial for making new memories. H.M.'s case also revealed, on closer examination, that memory is not a monolith: Given a tricky mirror drawing task, H.M.'s performance improved steadily over 3 days even though he had no memory of his previous practice. Remembering how is not the same as remembering what, as far as the brain is concerned.

    Thanks to experiments on animals and the advent of human brain imaging, scientists now have a working knowledge of the various kinds of memory as well as which parts of the brain are involved in each. But persistent gaps remain. Although the MTL has indeed proved critical for declarative memory—the recollection of facts and events—the region remains something of a black box. How its various components interact during memory encoding and retrieval is unresolved. Moreover, the MTL is not the final repository of declarative memories. Such memories are apparently filed to the cerebral cortex for long-term storage, but how this happens, and how memories are represented in the cortex, remains unclear.

    More than a century ago, the great Spanish neuroanatomist Santiago Ramón y Cajal proposed that making memories must require neurons to strengthen their connections with one another. Dogma at the time held that no new neurons are born in the adult brain, so Ramón y Cajal made the reasonable assumption that the key changes must occur between existing neurons. Until recently, scientists had few clues about how this might happen.

    Memorable diagram.

    Santiago Ramón y Cajal's drawing of the hippocampus. He proposed that memories involve strengthened neural connections.

    Since the 1970s, however, work on isolated chunks of nervous-system tissue has identified a host of molecular players in memory formation. Many of the same molecules have been implicated in both declarative and nondeclarative memory and in species as varied as sea slugs, fruit flies, and rodents, suggesting that the molecular machinery for memory has been widely conserved. A key insight from this work has been that short-term memory (lasting minutes) involves chemical modifications that strengthen existing connections, called synapses, between neurons, whereas long-term memory (lasting days or weeks) requires protein synthesis and probably the construction of new synapses.

    Tying this work to the whole-brain research is a major challenge. A potential bridge is a process called long-term potentiation (LTP), a type of synaptic strengthening that has been scrutinized in slices of rodent hippocampus and is widely considered a likely physiological basis for memory. A conclusive demonstration that LTP really does underlie memory formation in vivo would be a big breakthrough.

    Meanwhile, more questions keep popping up. Recent studies have found that patterns of neural activity seen when an animal is learning a new task are replayed later during sleep. Could this play a role in solidifying memories? Other work shows that our memories are not as trustworthy as we generally assume. Why is memory so labile? A hint may come from recent studies that revive the controversial notion that memories are briefly vulnerable to manipulation each time they're recalled. Finally, the no-new-neurons dogma went down in flames in the 1990s, with the demonstration that the hippocampus, of all places, is a virtual neuron nursery throughout life. The extent to which these newborn cells support learning and memory remains to be seen.

  35. How Did Cooperative Behavior Evolve?

    1. Elizabeth Pennisi

    When Charles Darwin was working out his grand theory on the origin of species, he was perplexed by the fact that animals from ants to people form social groups in which most individuals work for the common good. This seemed to run counter to his proposal that individual fitness was key to surviving over the long term.

    By the time he wrote The Descent of Man, however, he had come up with a few explanations. He suggested that natural selection could encourage altruistic behavior among kin so as to improve the reproductive potential of the “family.” He also introduced the idea of reciprocity: that unrelated but familiar individuals would help each other out if both were altruistic. A century of work with dozens of social species has borne out his ideas to some degree, but the details of how and why cooperation evolved remain to be worked out. The answers could help explain human behaviors that seem to make little sense from a strict evolutionary perspective, such as risking one's life to save a drowning stranger.

    Animals help each other out in many ways. In social species from honeybees to naked mole rats, kinship fosters cooperation: Females forgo reproduction and instead help the dominant female with her young. And common agendas help unrelated individuals work together. Male chimpanzees, for example, gang up against predators, protecting each other at a potential cost to themselves.

    Generosity is pervasive among humans. Indeed, some anthropologists argue that the evolution of the tendency to trust one's relatives and neighbors helped humans become Earth's dominant vertebrate: The ability to work together provided our early ancestors with more food, better protection, and better childcare, which in turn improved reproductive success.

    However, the degree of cooperation varies. “Cheaters” can gain a leg up on the rest of humankind, at least in the short term. But cooperation prevails among many species, suggesting that over the long run it is the better survival strategy, despite all the strife among ethnic, political, religious, even family groups now rampant within our species.


    Evolutionary biologists and animal behavior researchers are searching out the genetic basis and molecular drivers of cooperative behaviors, as well as the physiological, environmental, and behavioral impetus for sociality. Neuroscientists studying mammals from voles to hyenas are discovering key correlations between brain chemicals and social strategies.

    Others with a more mathematical bent are applying evolutionary game theory, a modeling approach developed for economics, to quantify cooperation and predict behavioral outcomes under different circumstances. Game theory has helped reveal a seemingly innate desire for fairness: Game players will spend time and energy to punish unfair actions, even though they gain nothing by doing so. Similar studies have shown that even when two people meet just once, they tend to be fair to each other. Such actions are hard to explain, as they don't fit the basic tenet that cooperation is ultimately based on self-interest.
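    The workhorse model in this literature is the iterated prisoner's dilemma, in which reciprocity can be tested directly against pure self-interest. A minimal sketch follows; the payoff values and the two strategies are textbook illustrations, not parameters from any study described here.

```python
# Minimal iterated prisoner's dilemma: a reciprocator ("tit for tat")
# against an unconditional defector. Payoffs are the standard textbook
# values, assumed for illustration.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):      # cooperate first, then mirror the opponent
    return opp_history[-1] if opp_history else 'C'

def always_defect(opp_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)   # each sees the other's past
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(always_defect, tit_for_tat))  # (104, 99): defection wins only the first round
```

    Note that tit for tat requires exactly the memory for past encounters discussed below: mutual reciprocators far outscore a defector paired with a reciprocator over repeated play, even though defection wins any single round.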

    The models developed through these games are still imperfect. They do not adequately consider, for example, the effect of emotions on cooperation. Nonetheless, with game theory's increasing sophistication, researchers hope to gain a clearer sense of the rules that govern complex societies.

    Together, these efforts are helping social scientists and others build on Darwin's observations about cooperation. As Darwin predicted, reciprocity is a powerful fitness tactic. But it is not a pervasive one.

    Modern researchers have discovered that a good memory is a prerequisite: It seems reciprocity is practiced only by organisms that can keep track of those who are helpful and those who are not. Humans have a great memory for faces and thus can maintain lifelong good—or hard—feelings toward people they don't see for years. Most other species exhibit reciprocity only over very short time scales, if at all.

    Limited to his personal observations, Darwin was able to come up with only general rationales for cooperative behavior. Now, with new insights from game theory and other promising experimental approaches, biologists are refining Darwin's ideas, hoping that one day they will understand just what it takes to bring out our cooperative spirit.

  36. How Will Big Pictures Emerge From a Sea of Biological Data?

    1. Elizabeth Pennisi

    Biology is rich in descriptive data—and getting richer all the time. Large-scale methods of probing samples, such as DNA sequencing, microarrays, and automated gene-function studies, are filling new databases to the brim. Many subfields from biomechanics to ecology have gone digital, and as a result, observations are more precise and more plentiful. A central question now confronting virtually all fields of biology is whether scientists can deduce from this torrent of molecular data how systems and whole organisms work. All this information needs to be sifted, organized, compiled, and—most importantly—connected in a way that enables researchers to make predictions based on general principles.

    Enter systems biology. Loosely defined and still struggling to find its way, this newly emerging approach aims to connect the dots that have emerged from decades of molecular, cellular, organismal, and even environmental observations. Its proponents seek to make biology more quantitative by relying on mathematics, engineering, and computer science to build a more rigorous framework for linking disparate findings. They argue that it is the only way the field can move forward. And they suggest that biomedicine, particularly deciphering risk factors for disease, will benefit greatly.

    The field got a big boost from the completion of the human genome sequence. The product of a massive, trip-to-the-moon logistical effort, the sequence is now a hard and fast fact. The biochemistry of human inheritance has been defined and measured. And that has inspired researchers to try to make other aspects of life equally knowable.

    Molecular geneticists dream of having a similarly comprehensive view of networks that control genes: For example, they would like to identify rules explaining how a single DNA sequence can express different proteins, or varying amounts of protein, in different circumstances (see p. 80). Cell biologists would like to reduce the complex communication patterns traced by molecules that regulate the health of the cell to a set of signaling rules. Developmental biologists would like a comprehensive picture of how the embryo manages to direct a handful of cells into a myriad of specialized functions in bone, blood, and skin tissue. These hard puzzles can only be solved by systems biology, proponents say. The same can be said for neuroscientists trying to work out the emergent properties—higher thought, for example—hidden in complex brain circuits. To understand ecosystem changes, including global warming, ecologists need ways to incorporate physical as well as biological data into their thinking.

    Systems approach. Circuit diagrams help clarify nerve cell functions.


    Today, systems biologists have only begun to tackle relatively simple networks. They have worked out the metabolic pathway in yeast for breaking down galactose, a carbohydrate. Others have tracked the first few hours of the embryonic development of sea urchins and other organisms with the goal of seeing how various transcription factors alter gene expression over time. Researchers are also developing rudimentary models of signaling networks in cells and simple brain circuits.
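    At their simplest, network models like those mentioned above are often built as Boolean networks: each gene is ON or OFF, and a logic rule computes its next state from the states of the others. The wiring below is hypothetical, chosen only to show how such a model settles into a repeating pattern; it is not any published yeast or sea urchin network.

```python
# Toy Boolean gene network (hypothetical wiring, for illustration only):
# each gene is ON/OFF, and its next state is a logical function of the others.
def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {
        "A": not c,        # C represses A
        "B": a,            # A activates B
        "C": a and b,      # C needs both A and B to switch on
    }

# Iterate from an initial condition and watch the network cycle.
state = {"A": True, "B": False, "C": False}
seen = []
for _ in range(8):
    seen.append((state["A"], state["B"], state["C"]))
    state = step(state)
print(seen)
```

With this wiring the three genes fall into a five-state cycle, a crude analog of the oscillations and switches real regulatory circuits exhibit; scaling such models up while keeping them interpretable is exactly the challenge the field faces.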

    Progress is limited by the difficulty of translating biological patterns into computer models. Network computer programs themselves are relatively simple, and the methods of portraying the results in ways that researchers can understand and interpret need improving. New institutions around the world are gathering interdisciplinary teams of biologists, mathematicians, and computer specialists to help promote systems biology approaches. But it is still in its early days.

    No one yet knows whether intensive interdisciplinary work and improved computational power will enable researchers to create a comprehensive, highly structured picture of how life works.

  37. How Far Can We Push Chemical Self-Assembly?

    1. Robert F. Service

    Most physical scientists nowadays focus on uncovering nature's mysteries; chemists build things. There is no synthetic astronomy or synthetic physics, at least for now. But chemists thrive on finding creative new ways to assemble molecules. For the last 100 years, they have done that mostly by making and breaking the strong covalent bonds that form when atoms share electrons. Using that trick, they have learned to combine as many as 1000 atoms into essentially any molecular configuration they please.

    Impressive as it is, this level of complexity pales in comparison to what nature flaunts all around us. Everything from cells to cedar trees is knit together using a myriad of weaker links between small molecules. These weak interactions, such as hydrogen bonds, van der Waals forces, and π-π interactions, govern the assembly of everything from DNA in its famous double helix to the bonding of H2O molecules in liquid water. More than just riding herd on molecules, such subtle forces make it possible for structures to assemble themselves into an ever more complex hierarchy. Lipids coalesce to form cell membranes. Cells organize to form tissues. Tissues combine to create organisms. Today, chemists can't approach the complexity of what nature makes look routine. Will they ever learn to make complex structures that self-assemble?

    Well, they've made a start. Over the past 3 decades, chemists have made key strides in learning the fundamental rules of noncovalent bonding. Among these rules: Like prefers like. We see this in hydrophobic and hydrophilic interactions that propel lipid molecules in water to corral together to form the two-layer membranes that serve as the coatings surrounding cells. They bunch their oily tails together to avoid any interaction with water and leave their more polar head groups facing out into the liquid. Another rule: Self-assembly is governed by energetically favorable reactions. Leave the right component molecules alone, and they will assemble themselves into complex ordered structures.


    Chemists have learned to take advantage of these and other rules to design self-assembling systems with a modest degree of complexity. Drug-carrying liposomes, made with lipid bilayers resembling those in cells, are used commercially to ferry drugs to cancerous tissues in patients. And self-assembled molecules called rotaxanes, which can act as molecular switches that oscillate back and forth between two stable states, hold promise as switches in future molecular-based computers.

    But the need for increased complexity is growing, driven by the miniaturization of computer circuitry and the rise of nanotechnology. As features on computer chips continue to shrink, the cost of manufacturing these ever-smaller components is skyrocketing. Right now, companies make them by whittling materials down to the desired size. At some point, however, it will become cheaper to design and build them chemically from the bottom up.

    Self-assembly is also the only practical approach for building a wide variety of nanostructures. Making sure the components assemble themselves correctly, however, is not an easy task. Because the forces at work are so small, self-assembling molecules can get trapped in undesirable conformations, making defects all but impossible to avoid. Any new system that relies on self-assembly must be able either to tolerate those defects or repair them. Again, biology offers an example in DNA. When enzymes copy DNA strands during cell division, they invariably make mistakes—occasionally inserting an A when they should have inserted a T, for example. Some of those mistakes get by, but most are caught by DNA-repair enzymes that scan the newly synthesized strands and correct copying errors.

    Strategies like that won't be easy for chemists to emulate. But if they want to make complex, ordered structures from the ground up, they'll have to get used to thinking a bit more like nature.

  38. What Are the Limits of Conventional Computing?

    1. Charles Seife

    At first glance, the ultimate limit of computation seems to be an engineering issue. How much energy can you put in a chip without melting it? How fast can you flip a bit in your silicon memory? How big can you make your computer and still fit it in a room? These questions don't seem terribly profound.

    In fact, computation is more abstract and fundamental than figuring out the best way to build a computer. This realization came in the mid-1930s, when Princeton mathematicians Alonzo Church and Alan Turing showed—roughly speaking—that any calculation involving bits and bytes can be done on an idealized computer known as a Turing machine. By showing that all classical computers are essentially alike, this discovery enabled scientists and mathematicians to ask fundamental questions about computation without getting bogged down in the minutiae of computer architecture.

    For example, theorists can now classify computational problems into broad categories. P problems are those, broadly speaking, that can be solved quickly, such as alphabetizing a list of names. NP problems are much tougher to solve but relatively easy to check once you've reached an answer. An example is the traveling salesman problem, finding the shortest possible route through a series of locations. All known algorithms for getting an answer take lots of computing power, and even relatively small versions might be out of reach of any classical computer.

    Mathematicians have shown that if you could come up with a quick and easy shortcut to solving any one of the hardest type of NP problems, you'd be able to crack them all. In effect, the NP problems would turn into P problems. But it's uncertain whether such a shortcut exists—whether P = NP. Scientists think not, but proving this is one of the great unanswered questions in mathematics.
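    The asymmetry between solving and checking can be seen in a few lines of code. In this illustrative sketch (the four-city distance matrix is made up), verifying a proposed tour's length takes only n additions, while the exhaustive search examines (n-1)! orderings and quickly becomes infeasible as n grows.

```python
# "Hard to solve, easy to check": brute-force search for the traveling
# salesman problem examines (n-1)! routes, but verifying a proposed
# route's length takes only n additions.
from itertools import permutations

def route_length(dist, route):
    """Cheap check: sum the legs of a closed tour (polynomial time)."""
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def brute_force_tsp(dist):
    """Expensive search: try every ordering of the other cities."""
    cities = list(range(1, len(dist)))
    best = min(permutations(cities),
               key=lambda p: route_length(dist, [0] + list(p)))
    return [0] + list(best)

# Hypothetical 4-city distance matrix (symmetric, for illustration).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

tour = brute_force_tsp(dist)
print(tour, route_length(dist, tour))
```

Four cities mean only 6 routes to try, but 20 cities would mean roughly 1.2 × 10^17; that explosive growth, with no known shortcut, is what makes such problems candidates for the "NP-hard" category.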


    In the 1940s, Bell Labs scientist Claude Shannon showed that bits are not just for computers; they are the fundamental units of describing the information that flows from one object to another. There are physical laws that govern how fast a bit can move from place to place, how much information can be transferred back and forth over a given communications channel, and how much energy it takes to erase a bit from memory. All classical information-processing machines are subject to these laws—and because information seems to be rattling back and forth in our brains, do the laws of information mean that our thoughts are reducible to bits and bytes? Are we merely computers? It's an unsettling thought.
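    Shannon's unit has a simple formula behind it: the entropy H = -sum(p * log2(p)) gives the average number of bits per symbol a source produces. A minimal computation, using standard information-theory examples rather than anything from the article:

```python
# Shannon entropy: the average number of bits needed to describe one
# symbol from a source with the given symbol probabilities.
from math import log2

def entropy(probs):
    """H = -sum(p * log2(p)), skipping zero-probability symbols."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # a fair coin flip carries exactly 1 bit
print(entropy([0.9, 0.1]))   # a biased coin carries less than 1 bit
print(entropy([0.25] * 4))   # four equally likely symbols: 2 bits
```

The biased coin illustrates Shannon's key insight: predictable messages carry less information, which is why they can be compressed.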

    But there is a realm beyond the classical computer: the quantum. The probabilistic nature of quantum theory allows atoms and other quantum objects to store information that's not restricted to only the binary 0 or 1 of information theory, but can also be 0 and 1 at the same time. Physicists around the world are building rudimentary quantum computers that exploit this and other quantum effects to do things that are provably impossible for ordinary computers, such as finding a target record in a database with too few queries. But scientists are still trying to figure out what quantum-mechanical properties make quantum computers so powerful and to engineer quantum computers big enough to do something useful.

    By learning the strange logic of the quantum world and using it to do computing, scientists are delving deep into the laws of the subatomic world. Perhaps something as seemingly mundane as the quest for computing power might lead to a newfound understanding of the quantum realm.

  39. Can We Selectively Shut Off Immune Responses?

    1. Jon Cohen

    In the past few decades, organ transplantation has gone from experimental to routine. In the United States alone, more than 20,000 heart, liver, and kidney transplants are performed every year. But for transplant recipients, one prospect has remained unchanged: a lifetime of taking powerful drugs to suppress the immune system, a treatment that can have serious side effects. Researchers have long sought ways to induce the immune system to tolerate a transplant without blunting the body's entire defenses, but so far, they have had limited success.

    They face formidable challenges. Although immune tolerance can occur—in rare cases, transplant recipients who stop taking immunosuppressants have not rejected their foreign organs—researchers don't have a clear picture of what is happening at the molecular and cellular levels to allow this to happen. Tinkering with the immune system is also a bit like tinkering with a mechanical watch: Fiddle with one part, and you may disrupt the whole mechanism. And there is a big roadblock to testing drugs designed to induce tolerance: It is hard to know if they work unless immunosuppressant drugs are withdrawn, and that would risk rejection of the transplant. But if researchers can figure out how to train the immune system to tolerate transplants, the knowledge could have implications for the treatment of autoimmune diseases, which also result from unwanted immune attack—in these cases on some of the body's own tissues.

    A report in Science 60 years ago fired the starting gun in the race to induce transplant tolerance—a race that has turned into a marathon. Ray Owen of the University of Wisconsin, Madison, reported that fraternal twin cattle sometimes share a placenta and are born with each other's red blood cells, a state referred to as mixed chimerism. The cattle tolerated the foreign cells with no apparent problems.

    A few years later, Peter Medawar and his team at the University of Birmingham, U.K., showed that fraternal twin cattle with mixed chimerism readily accept skin grafts from each other. Medawar did not immediately appreciate the link to Owen's work, but when he saw the connection, he decided to inject fetal mice in utero with tissue from mice of a different strain. In a publication in Nature in 1953, the researchers showed that, after birth, some of these mice tolerated skin grafts from different strains. This influential experiment led many to devote their careers to transplantation and also raised hopes that the work would lead to cures for autoimmune diseases.

    Immunologists, many of them working with mice, have since spelled out several detailed mechanisms behind tolerance. The immune system can, for example, dispatch “regulatory” cells that suppress immune attacks against self. Or the system can force harmful immune cells to commit suicide or to go into a dysfunctional stupor called anergy. Researchers indeed now know fine details about the genes, receptors, and cell-to-cell communications that drive these processes.


    Yet it's one matter to unravel how the immune system works and another to figure out safe ways to manipulate it. Transplant researchers are pursuing three main strategies to induce tolerance. One builds on Medawar's experiments by trying to exploit chimerism. Researchers infuse the patient with the organ donor's bone marrow in hopes that the donor's immune cells will teach the host to tolerate the transplant; donor immune cells that come along with the transplanted organ also, some contend, can teach tolerance. A second strategy uses drugs to train T cells to become anergic or commit suicide when they see the foreign antigens on the transplanted tissue. The third approach turns up production of T regulatory cells, which prevent specific immune cells from copying themselves and can also suppress rejection by secreting biochemicals called cytokines that direct the immune orchestra to change its tune.

    All these strategies face a common problem: It is maddeningly difficult to judge whether the approach has failed or succeeded because there are no reliable “biomarkers” that indicate whether a person has become tolerant to a transplant. So the only way to assess tolerance is to stop drug treatment, which puts the patient at risk of rejecting the organ. Similarly, ethical concerns often require researchers to test drugs aimed at inducing tolerance in concert with immunosuppressive therapy. This, in turn, can undermine the drugs' effectiveness because they need a fully functioning immune system to do their job.

    If researchers can complete their 50-year quest to induce immune tolerance safely and selectively, the prospects for hundreds of thousands of transplant recipients would be greatly improved, and so, too, might the prospects for controlling autoimmune diseases.

  40. Do Deeper Principles Underlie Quantum Uncertainty and Nonlocality?

    1. Charles Seife

    “Quantum mechanics is very impressive,” Albert Einstein wrote in 1926. “But an inner voice tells me that it is not yet the real thing.” As quantum theory matured over the years, that voice has gotten quieter—but it has not been silenced. There is a relentless murmur of confusion underneath the chorus of praise for quantum theory.

    Quantum theory was born at the very end of the 19th century and soon became one of the pillars of modern physics. It describes, with incredible precision, the bizarre and counterintuitive behavior of the very small: atoms and electrons and other wee beasties of the submicroscopic world. But that success came with the price of discomfort. The equations of quantum mechanics work very well; they just don't seem to make sense.

    No matter how you look at the equations of quantum theory, they allow a tiny object to behave in ways that defy intuition. For example, such an object can be in “superposition”: It can have two mutually exclusive properties at the same time. The mathematics of quantum theory says that an atom, for example, can be on the left side of a box and the right side of the box at the very same instant, as long as the atom is undisturbed and unobserved. But as soon as an observer opens the box and tries to spot where the atom is, the superposition collapses and the atom instantly “chooses” whether to be on the right or the left.

    This idea is almost as unsettling today as it was 80 years ago, when Erwin Schrödinger ridiculed superposition by describing a half-living, half-dead cat. That is because quantum theory changes what the meaning of “is” is. In the classical world, an object has a solid reality: Even a cloud of gas is well described by hard little billiard ball-like pieces, each of which has a well-defined position and velocity. Quantum theory seems to undermine that solid reality. Indeed, the famous Uncertainty Principle, which arises directly from the mathematics of quantum theory, says that objects' positions and momenta are smeary and ill defined, and gaining knowledge about one implies losing knowledge about the other.

    The early quantum physicists dealt with this unreality by saying that the “is”—the fundamental objects handled by the equations of quantum theory—were not actually particles that had an extrinsic reality but “probability waves” that merely had the capability of becoming “real” when an observer makes a measurement. This so-called Copenhagen Interpretation makes sense, if you're willing to accept that reality is probability waves and not solid objects. Even so, it still doesn't sufficiently explain another weirdness of quantum theory: nonlocality.


    In 1935, Einstein came up with a scenario that still defies common sense. In his thought experiment, two particles fly away from each other and wind up at opposite ends of the galaxy. But the two particles happen to be “entangled”—linked in a quantum-mechanical sense—so that one particle instantly “feels” what happens to its twin. Measure one, and the other is instantly transformed by that measurement as well; it's as if the twins mystically communicate, instantly, over vast regions of space. This “nonlocality” is a mathematical consequence of quantum theory and has been measured in the lab. The spooky action apparently ignores distance and the flow of time; in theory, particles can be entangled after their entanglement has already been measured.

    On one level, the weirdness of quantum theory isn't a problem at all. The mathematical framework is sound and describes all these bizarre phenomena well. If we humans can't imagine a physical reality that corresponds to our equations, so what? That attitude has been called the “shut up and calculate” interpretation of quantum mechanics. But to others, our difficulties in wrapping our heads around quantum theory hint at greater truths yet to be understood.

    Some physicists in the second group are busy trying to design experiments that can get to the heart of the weirdness of quantum theory. They are slowly testing what causes quantum superpositions to “collapse”—research that may gain insight into the role of measurement in quantum theory as well as into why big objects behave so differently from small ones. Others are looking for ways to test various explanations for the weirdnesses of quantum theory, such as the “many worlds” interpretation, which explains superposition, entanglement, and other quantum phenomena by positing the existence of parallel universes. Through such efforts, scientists might hope to get beyond the discomfort that led Einstein to declare that “[God] does not play dice.”

  41. Is an Effective HIV Vaccine Feasible?

    1. Jon Cohen

    In the 2 decades since researchers identified HIV as the cause of AIDS, more money has been spent on the search for a vaccine against the virus than on any vaccine effort in history. The U.S. National Institutes of Health alone invests nearly $500 million each year, and more than 50 different preparations have entered clinical trials. Yet an effective AIDS vaccine, which potentially could thwart millions of new HIV infections each year, remains a distant dream.

    Although AIDS researchers have turned the virus inside-out and carefully detailed how it destroys the immune system, they have yet to unravel which immune responses can fend off an infection. That means, as one AIDS vaccine researcher famously put it more than a decade ago, the field is “flying without a compass.”

    Some skeptics contend that no vaccine will ever stop HIV. They argue that the virus replicates so quickly and makes so many mistakes during the process that vaccines can't possibly fend off all the types of HIV that exist. HIV also has developed sophisticated mechanisms to dodge immune attack, shrouding its surface protein in sugars to hide vulnerable sites from antibodies and producing proteins that thwart production of other immune warriors. And the skeptics point out that vaccine developers have had little success against pathogens like HIV that routinely outwit the immune system—the malaria parasite, hepatitis C virus, and the tuberculosis bacillus are prime examples.

    Yet AIDS vaccine researchers have solid reasons to believe they can succeed. Monkey experiments have shown that vaccines can protect animals from SIV, a simian relative of HIV. Several studies have identified people who repeatedly expose themselves to HIV but remain uninfected, suggesting that something is stopping the virus. A small percentage of people who do become infected never seem to suffer any harm, and others hold the virus at bay for a decade or more before showing damage to their immune systems. Scientists also have found that some rare antibodies do work powerfully against the virus in test tube experiments.

    At the start, researchers pinned their hopes on vaccines designed to trigger production of antibodies against HIV's surface protein. The approach seemed promising because HIV uses the surface protein to latch onto white blood cells and establish an infection. But vaccines that only contained HIV's surface protein looked lackluster in animal and test tube studies, and then proved worthless in large-scale clinical trials.


    Now, researchers are intensely investigating other approaches. When HIV manages to thwart antibodies and establish an infection, a second line of defense, cellular immunity, specifically targets and eliminates HIV-infected cells. Several vaccines now being tested aim to stimulate production of killer cells, the storm troopers of the cellular immune system. But cellular immunity involves other players—such as macrophages, the network of chemical messengers called cytokines, and so-called natural killer cells—that have received scant attention.

    The hunt for an antibody-based vaccine also is going through something of a renaissance, although it's requiring researchers to think backward. Vaccine researchers typically start with antigens—in this case, pieces of HIV—and then evaluate the antibodies they elicit. But now researchers have isolated more than a dozen antibodies from infected people that have blocked HIV infection in test tube experiments. The trick will be to figure out which specific antigens triggered their production.

    It could well be that a successful AIDS vaccine will need to stimulate both the production of antibodies and cellular immunity, a strategy many are attempting to exploit. Perhaps the key will be stimulating immunity at mucosal surfaces, where HIV typically enters. It's even possible that researchers will discover an immune response that no one knows about today. Or perhaps the answer lies in the interplay between the immune system and human genetic variability: Studies have highlighted genes that strongly influence who is most susceptible—and who is most resistant—to HIV infection and disease.

    Wherever the answer lies, the insights could help in the development of vaccines against other diseases that, like HIV, don't easily succumb to immune attack and that kill millions of people. Vaccine developers for these diseases will probably also have to look in unusual places for answers. The maps created by AIDS vaccine researchers currently exploring uncharted immunologic terrain could prove invaluable.

  42. How Hot Will the Greenhouse World Be?

    1. Richard A. Kerr

    Scientists know that the world has warmed lately, and they believe humankind is behind most of that warming. But how far might we push the planet in coming decades and centuries? That depends on just how sensitively the climate system—air, oceans, ice, land, and life—responds to the greenhouse gases we're pumping into the atmosphere. For a quarter-century, expert opinion was vague about climate sensitivity. Experts allowed that climate might be quite touchy, warming sharply when shoved by one climate driver or another, such as the carbon dioxide from fossil fuel burning, volcanic debris, or dimming of the sun. On the other hand, the same experts conceded that climate might be relatively unresponsive, warming only modestly despite a hard push toward the warm side.

    The problem with climate sensitivity is that you can't just go out and directly measure it. Sooner or later a climate model must enter the picture. Every model has its own sensitivity, but each is subject to all the uncertainties inherent in building a hugely simplified facsimile of the real-world climate system. As a result, climate scientists have long quoted the same vague range for sensitivity: A doubling of the greenhouse gas carbon dioxide, which is expected to occur this century, would eventually warm the world between a modest 1.5°C and a whopping 4.5°C. This range—based on just two early climate models—first appeared in 1979 and has been quoted by every major climate assessment since.
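    The canonical range can be turned into concrete warming estimates using the standard first-order rule that CO2's radiative forcing grows with the logarithm of its concentration, so equilibrium warming scales as S * log2(C/C0). This is a textbook approximation, not a calculation from the article:

```python
# Because CO2 forcing grows logarithmically with concentration, the
# equilibrium warming for any CO2 level follows from the doubling
# sensitivity S:  dT = S * log2(C / C0).
# (Standard first-order approximation; S values span the canonical range.)
from math import log2

def warming(c_ppm, s_per_doubling, c0_ppm=280.0):
    """Equilibrium warming (deg C) relative to preindustrial CO2 (~280 ppm)."""
    return s_per_doubling * log2(c_ppm / c0_ppm)

for s in (1.5, 3.0, 4.5):            # low, central, high canonical sensitivity
    print(f"S={s}: doubling -> {warming(560, s):.1f} deg C, "
          f"450 ppm -> {warming(450, s):.1f} deg C")
```

Note how the same emissions pathway yields a threefold spread in projected warming; that spread is precisely why pinning down S matters so much.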

    A harbinger? Coffins being lined up during the record-breaking 2003 heat wave in Europe.


    Researchers are finally beginning to tighten up the range of possible sensitivities, at least at one end. For one, the sensitivities of the available models (5% to 95% confidence range) are now falling within the canonical range of 1.5°C to 4.5°C; some had gone considerably beyond the high end. And the first try at a new approach—running a single model while varying a number of model parameters such as cloud behavior—has produced a sensitivity range of 2.4°C to 5.4°C, with a most probable value of 3.2°C.

    Models are only models, however. How much better if nature ran the experiment? Enter paleoclimatologists, who sort out how climate drivers such as greenhouse gases have varied naturally in the distant past and how the climate system of the time responded. Nature, of course, has never run the perfect analog for the coming greenhouse warming. And estimating how much carbon dioxide concentrations fell during the depths of the last ice age, or how much sunlight the debris from the eruption of Mount Pinatubo in the Philippines blocked, will always have lingering uncertainties. But paleoclimate estimates of climate sensitivity generally fall in the canonical range, with a best estimate in the region of 3°C.

    The lower end at least of likely climate sensitivity does seem to be firming up; it's not likely below 1.5°C, say researchers. That would rule out the negligible warmings proposed by some greenhouse contrarians. But climate sensitivity calculations still put a fuzzy boundary on the high end. Studies drawing on the past century's observed climate change plus estimates of natural and anthropogenic climate drivers yield up to 30% probabilities of sensitivities above 4.5°C, ranging as high as 9°C. The latest study that varies model parameters allows sensitivities up to 11°C, with the authors contending that they can't yet say what the chances of such extremes are. Others are pointing to times of extreme warmth in the geologic past that climate models fail to replicate, suggesting that there's a dangerous element to the climate system that the models do not yet contain.

    Climate researchers have their work cut out for them. They must inject a better understanding of clouds and aerosols—the biggest sources of uncertainty—into their modeling. Ten or 15 years ago, scientists said that would take 10 or 15 years; there's no sign of it happening anytime soon. They must increase the fidelity of models, a realistic goal given the continued acceleration of affordable computing power. And they must retrieve more and better records of past climate changes and their drivers. Meanwhile, unless a rapid shift away from fossil fuel use occurs worldwide, a doubling of carbon dioxide—and more—will be inevitable.

  43. What Can Replace Cheap Oil--and When?

    1. Richard A. Kerr,
    2. Robert F. Service

    The road from old to new energy sources can be bumpy, but the transitions have gone pretty smoothly in the past. After millennia of dependence on wood, society added coal and gravity-driven water to the energy mix. Industrialization took off. Oil arrived, and transportation by land and air soared, with hardly a worry about where the next log or lump of coal was coming from, or what the explosive growth in energy production might be doing to the world.

    Times have changed. The price of oil has been climbing, and ice is melting around both poles as the mercury in the global thermometer rises. Whether the next big energy transition will be as smooth as past ones will depend in large part on three sets of questions: When will world oil production peak? How sensitive is Earth's climate to the carbon dioxide we are pouring into the atmosphere by burning fossil fuels? And will alternative energy sources be available at reasonable costs? The answers rest on science and technology, but how society responds will be firmly in the realm of politics.

    There is little disagreement that the world will soon be running short of oil. The debate is over how soon. Global demand for oil has been rising at 1% or 2% each year, and we are now sucking almost 1000 barrels of oil from the ground every second. Pessimists—mostly former oil company geologists—expect oil production to peak very soon. They point to American geologist M. King Hubbert's successful 1956 prediction of the 1970 peak in U.S. production. Using the same method involving records of past production and discoveries, they predict a world oil peak by the end of the decade. Optimists—mostly resource economists—argue that oil production depends more on economics and politics than on how much happens to be in the ground. Technological innovation will intervene, and production will continue to rise, they say. Even so, midcentury is about as far as anyone is willing to push the peak. That's still “soon” considering that the United States, for one, will need to begin replacing oil's 40% contribution to its energy consumption by then. And as concerns about climate change intensify, the transition to nonfossil fuels could become even more urgent (see p. 100).

    If oil supplies do peak soon or climate concerns prompt a major shift away from fossil fuels, plenty of alternative energy supplies are waiting in the wings. The sun bathes Earth's surface with 86,000 trillion watts, or terawatts, of power at all times, about 6600 times the rate at which all humans on the planet use energy. Wind, biomass, and nuclear power are also plentiful. And there is no shortage of opportunities for using energy more efficiently.
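    As a quick sanity check, the quoted figures are mutually consistent; dividing the solar input by the stated ratio recovers humanity's roughly 13-terawatt rate of energy use:

```python
# The solar-resource numbers quoted above are internally consistent:
# 86,000 TW of sunlight divided by the stated factor of ~6600 equals
# humanity's ~13 TW rate of energy use.
solar_tw = 86_000          # terawatts of sunlight reaching Earth's surface
ratio = 6600               # solar input relative to human energy use
human_tw = solar_tw / ratio
print(round(human_tw, 1))  # ~13 TW
```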

    Of course, alternative energy sources have their issues. Nuclear fission supporters have never found a noncontroversial solution for disposing of long-lived radioactive wastes, and concerns over liability and capital costs are scaring utility companies off. Renewable energy sources are diffuse, making it difficult and expensive to corral enough power from them at cheap prices. So far, wind is leading the way with a global installed capacity of more than 40 billion watts, or gigawatts, providing electricity for about 4.5 cents per kilowatt hour.


    That sounds good, but the scale of renewable energy is still very small compared with fossil fuel use. In the United States, renewables account for just 6% of overall energy production. And with global energy demand expected to grow from approximately 13 terawatts now to somewhere between 30 and 60 terawatts by the middle of this century, use of renewables will have to expand enormously to displace current sources and have a significant impact on the world's future energy needs.
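    For a sense of what that range implies, the demand figures can be turned into compound annual growth rates. A rough sketch, assuming "the middle of this century" means 2050, about 45 years after this 2005 article:

```python
# Implied compound annual growth rates for demand rising from ~13 TW
# to 30-60 TW. Assumption: "middle of this century" means 2050,
# i.e. roughly 45 years out from the article's publication.
YEARS = 45
NOW_TW = 13

for target_tw in (30, 60):
    rate = (target_tw / NOW_TW) ** (1 / YEARS) - 1
    print(f"{NOW_TW} -> {target_tw} TW: {rate:.1%} per year")
```

Roughly 1.9% to 3.5% a year, sustained for decades — in line with the 1% to 2% historical growth the opening section describes, plus some acceleration at the high end.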

    What needs to happen for that to take place? Using energy more efficiently is likely to be the sine qua non of energy planning—not least to buy time for alternative energy technologies to improve. The cost of solar electric power modules has already dropped by two orders of magnitude over the past 30 years, and most experts figure the price needs to drop 100-fold again before solar energy systems will be widely adopted. Advances in nanotechnology may help by providing novel semiconductor systems that boost the efficiency of solar energy collectors or perhaps produce chemical fuels directly from sunlight, CO2, and water.
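    A drop of two orders of magnitude over 30 years corresponds to a steep sustained rate of decline. A back-of-envelope calculation of the steady annual rate that produces a factor-of-100 drop:

```python
# What "two orders of magnitude over 30 years" implies as a steady
# annual rate of cost decline: solve (1 - r)**30 == 0.01 for r.
YEARS = 30
TOTAL_DROP = 0.01  # factor-of-100 decline in module cost

annual_decline = 1 - TOTAL_DROP ** (1 / YEARS)
print(f"{annual_decline:.1%} per year")  # ~14.2% per year, sustained
```

Repeating that feat, as the experts quoted here expect is necessary, would mean holding a similar pace of cost reduction for another three decades.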

    But whether these will come in time to avoid an energy crunch depends in part on how high a priority we give energy research and development. And it will require a global political consensus on what the science is telling us.

  44. Will Malthus Continue to Be Wrong?

    1. Erik Stokstad

    In 1798, a 32-year-old curate at a small parish church in Albury, England, published a sobering pamphlet entitled An Essay on the Principle of Population. As a grim rebuttal of the utopian philosophers of his day, Thomas Malthus argued that human populations will always tend to grow and, eventually, they will always be checked—either by foresight, such as birth control, or as a result of famine, war, or disease. Those speculations have inspired many a dire warning from environmentalists.

    Since Malthus's time, world population has risen sixfold to more than 6 billion. Yet happily, apocalyptic collapses have mostly been prevented by the advent of cheap energy, the rise of science and technology, and the green revolution. Most demographers predict that by 2100, global population will level off at about 10 billion.

    The urgent question is whether current standards of living can be sustained while improving the plight of those in need. Consumption of resources—not just food but also water, fossil fuels, timber, and other essentials—has grown enormously in the developed world. In addition, humans have compounded the direct threats to those resources in many ways, including by changing climate (see p. 100), polluting land and water, and spreading invasive species.

    How can humans live sustainably on the planet and do so in a way that manages to preserve some biodiversity? Tackling that question involves a broad range of research for natural and social scientists. It's abundantly clear, for example, that humans are degrading many ecosystems and hindering their ability to provide clean water and other “goods and services” (Science, 1 April, p. 41). But exactly how bad is the situation? Researchers need better information on the status and trends of wetlands, forests, and other areas. To set priorities, they'd also like a better understanding of what makes ecosystems more resistant or vulnerable and whether stressed ecosystems, such as marine fisheries, have a threshold at which they won't recover.

    Out of balance. Sustaining a growing world population is threatened by inefficient consumption of resources—and by poverty.


    Agronomists face the task of feeding 4 billion more mouths. Yields may be maxing out in the developed world, but much can still be done in the developing world, particularly sub-Saharan Africa, which desperately needs more nitrogen. Although agricultural biotechnology clearly has potential to boost yields and lessen the environmental impact of farming, it has its own risks, and winning over skeptics has proven difficult.

    There's no shortage of work for social scientists either. Perverse subsidies that encourage overuse of resources—tax loopholes for luxury Hummers and other inefficient vehicles, for example—remain a chronic problem. A new area of activity is the attempt to place values on ecosystems' services, so that the price of clear-cut lumber, for instance, covers the loss of a forest's ability to provide clean water. Incorporating those “externalities” into pricing is a daunting challenge that demands much more knowledge of ecosystems. In addition, economic decisions often consider only net present value and discount the future value of resources—soil erosion, slash-and-burn agriculture, and the mining of groundwater for cities and farming are prime examples. All this complicates the process of transforming industries so that they provide jobs, goods, and services while damaging the environment less.

    Researchers must also grapple with the changing demographics of housing and how they will affect human well-being: In the next 35 to 50 years, the number of people living in cities will double. Much of that growth will likely occur in developing-world cities that currently have 30,000 to 3 million residents. Coping with the huge urban influx will require everything from energy-efficient ways to make concrete to simple ways to purify drinking water.

    And in an age of global television and relentless advertising, what will happen to patterns of consumption? The world clearly can't support 10 billion people living like Americans do today. Whether science—both the natural and social sciences—and technology can crank up efficiency and solve the problems we've created is perhaps the most critical question the world faces. Mustering the political will to make hard choices is, however, likely to be an even bigger challenge.