News this Week

Science  13 Oct 2006:
Vol. 314, Issue 5797, p. 232
  1. GENOMICS

    On Your Mark. Get Set. Sequence!

    1. Elizabeth Pennisi
    Genomics prize.

    An X Prize Foundation press conference last week kicked off a DNA sequencing competition that could usher in personalized medicine.

    CREDIT: ALEX WONG/GETTY IMAGES

    WASHINGTON, D.C.—Leave it to J. Craig Venter to up the ante again when it comes to deciphering the human genome. Eight years ago, as head of Celera Genomics, a private company, Venter got into a DNA sequencing race with the publicly funded Human Genome Project to read our code. Now, with Venter's nudging, a new race has begun.

    Last week, the X Prize Foundation announced it would pay $10 million to the first privately financed group to sequence 100 human genomes in 10 days. The winner will get another $1 million to decode the genomes of 100 additional people selected by the foundation. Physicist Stephen Hawking and talk show host Larry King are already on that list. It will also include people nominated by an alliance of patient advocacy groups, foreshadowing the day when an individual's genome sequence may tailor disease treatment or prevention efforts and bringing legal and ethical issues to the forefront.

    “We couldn't wait” to support this contest, says Stewart Blusson, a geologist and president of the Canadian diamond company Archon Minerals, who donated the $10 million. “It's so profound in what it means for mankind and all of life sciences.”

    Francis Collins, director of the National Human Genome Research Institute in Bethesda, Maryland, which has spent more than $380 million in recent years to fund new DNA sequencing technologies, also embraced the competition. Governments “can only do so much,” he says. “We are delighted to see this prize.”

    The X Prize Foundation made headlines 2 years ago when a small aerospace company won its first award by flying a rocket into space and back twice within 2 weeks, demonstrating the possibility of privately funded space travel. That success intrigued Venter, head of the J. Craig Venter Institute in Rockville, Maryland, who in 2003 had promised $500,000 to the first team to sequence a human genome for $1000.

    In 2005, Venter joined the board of the X Prize Foundation, which revamped his challenge. Thus far, the foundation has attracted only three contestants, even though it approached about 10 groups identified in an article in Science as pursuing promising DNA sequencing technologies (17 March, p. 1544).

    454 Life Sciences in Branford, Connecticut, and VisiGen in Houston, Texas, jumped at the chance to sign on. Steve Benner of the Foundation for Applied Molecular Evolution in Gainesville, Florida, has also joined up. But Applied Biosystems, the world's leader in DNA sequencing, and Solexa Inc. in Hayward, California, a DNA sequencing upstart, declined. “It's much more important to get the process right and get the quality right and not put on an artificial [time] constraint,” says Solexa's David Bentley.

    All the DNA sequencing experts polled by Science think a winner won't emerge for at least 5 years. The X Prize Foundation plans to offer contestants two 10-day windows each year to sequence 100 human cell lines that it provides. Those lines' DNA will already have been partially sequenced so that the foundation can verify any contestant's results.

    The rules of the game still need to be established. For example, the foundation is asking the scientific community how to define a finished genome. “I would like to see 99% of the diploid genome, i.e., about 6 billion base pairs of sequence covering both the maternal and paternal components of the chromosomes, and I would like to see 99.9999% accuracy,” says Nobel laureate Hamilton Smith of the Venter Institute, a member of the prize's advisory board.

    Sequencing DNA now boils down to making small DNA fragments, copying them, detecting the bases in order along each fragment, and using a computer to piece the fragments together. Vast improvements in the process will be needed to capture the prize. “We are asking for such speed that [the technology] must be able to read off fresh DNA in a massively parallel way,” says Laurence Kedes, a molecular biologist at the University of Southern California in Los Angeles and co-chair of the prize advisory board.
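
    In spirit, the last of those steps works like the toy sketch below: repeatedly glue together the two fragments that share the longest suffix-prefix overlap. (A minimal Python illustration of the idea only; real assemblers use far more sophisticated graph methods, and every name here is invented for the example.)

        def overlap(a, b, min_len=3):
            """Length of the longest suffix of a that matches a prefix of b."""
            start = 0
            while True:
                start = a.find(b[:min_len], start)
                if start == -1:
                    return 0
                if b.startswith(a[start:]):
                    return len(a) - start
                start += 1

        def greedy_assemble(reads):
            """Repeatedly merge the pair of reads with the largest overlap."""
            while len(reads) > 1:
                best_len, best_i, best_j = 0, None, None
                for i, a in enumerate(reads):
                    for j, b in enumerate(reads):
                        if i == j:
                            continue
                        olen = overlap(a, b)
                        if olen > best_len:
                            best_len, best_i, best_j = olen, i, j
                if best_i is None:
                    break  # no overlaps left; nothing more to merge
                merged = reads[best_i] + reads[best_j][best_len:]
                reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)]
                reads.append(merged)
            return reads

        print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]))
        # -> ['ATTAGACCTGCCGGAA']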

    Ewan Birney, a bioinformaticist at the Wellcome Trust Sanger Institute in Hinxton, U.K., says boosts in computational power will also be needed to assemble 100 genomes in 10 days. Currently, Birney's “farm” of 1000 computers takes a week to put a new human genome sequence together. “The data set will be closer [in volume] to what high-energy physicists have,” says Birney. Still, he promises, “it's doable.”
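
    A quick back-of-envelope check, using the article's own figures, shows the scale of that computational gap (the numbers are rounded and purely illustrative):

        farm_rate = 1 / 7.0        # genomes per day from a 1000-computer farm
        prize_rate = 100 / 10.0    # genomes per day the contest demands
        print(f"required speedup: ~{prize_rate / farm_rate:.0f}x")  # ~70x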

    Charles Cantor, chief scientific officer of SEQUENOM Inc. in San Diego, California, predicts only groups already versed in sequencing DNA will have a chance at the prize. Others disagree. “I think it is unlikely” that the winner will come from the genome-sequencing community, says Leroy Hood, who invented the first automated DNA sequencer. And Venter predicts that the chance that someone will come out of the woodwork to scoop up the $10 million is “close to 100%.” The starting gun has sounded.

  2. NUCLEAR PROLIFERATION

    North Korea's Bomb: Boom or Bust?

    1. Richard Stone

    With a muffled explosion deep inside a mountain, North Korea set the world on edge this week with a claim that it had detonated a nuclear bomb. But as Science went to press, researchers poring over seismic signals from the blast were pondering why the detonation appears to have been so small. Some wondered whether the test was a failure—or even an elaborate hoax.

    Either scenario would indicate that North Korea's claimed “nuclear deterrent” is, for now, more like a dirty bomb that would contaminate a wide area with plutonium. “The value of [the] nuclear deterrent just dropped to zero,” Harvard University nonproliferation expert Jeffrey Lewis wrote on 9 October on his popular blog, ArmsControlWonk.com. But the jury is still out, and the blast has sent diplomatic shock waves around the world.

    The political fallout has been swift. South Korea's President Roh Moo-hyun on 9 October declared that his government “will find it difficult to stick to its engagement policy towards North Korea.” China has joined Europe, Japan, and the United States in denouncing the test, paving the way for U.N.-mandated sanctions. U.S. officials insist that average North Koreans should not suffer for the sins of their government, so there is no talk of halting food aid or fuel oil shipments. But other interactions, including scientific exchanges, could be put on ice.

    About the only thing known for certain is that at 10:39 a.m. local time on 9 October, a small tremor shook North Korea's North Hamgyong Province. The U.S. Geological Survey measured the event at magnitude 4.2 on the Richter scale, whereas South Korean estimates put it in the range of 3.5 to 3.7. The seismic signature—a sharp and large “P” wave relative to the “S” wave—“argues strongly that the event was an explosion,” says geologist Jeffrey Park, a seismology expert at Yale University.

    But the yield is unclear. That's because analysts have only a sketchy idea of the test site's geology, a critical factor in gauging yield. Assuming a tight coupling between the shock waves and surrounding rock, South Korean and Western analysts peg the blast at the equivalent of several hundred tons of TNT. (For comparison, the “Fat Man” plutonium bomb dropped on Nagasaki in 1945 was 21 kilotons.) Loose rock would dampen the shock waves, suggesting a yield of 1 to 2 kilotons, says Geoffrey Forden, a physicist and weapons expert at the Massachusetts Institute of Technology in Cambridge. That's still considerably less potent than an estimate from Russia's defense minister Sergei Ivanov, who on 9 October asserted that the blast equaled 5 to 15 kilotons of TNT.
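
    For readers who want to see how such numbers are reached: yield estimates invert a magnitude-yield relation of the generic form mb = a + b·log10(Y). The sketch below uses hypothetical placeholder constants, not values calibrated to the North Korean site; as the analysts stress, a and b depend strongly on the local geology and coupling.

        def yield_kilotons(mb, a=4.3, b=1.0):
            """Invert mb = a + b*log10(Y) for the yield Y in kilotons."""
            return 10 ** ((mb - a) / b)

        for mb in (3.6, 4.2):  # the South Korean and USGS magnitude estimates
            print(f"mb {mb}: roughly {yield_kilotons(mb):.2f} kilotons")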

    North Korea insists it conducted a successful test. Analysts presume it was a plutonium bomb, as North Korea claims it has separated plutonium from irradiated nuclear reactor fuel and in 2004 showed a U.S. expert what appeared to be milled plutonium metal. Ever since, a debate has raged over whether North Korea has the technical prowess to get a plutonium sphere to implode and fission.

    The test, for now, does not resolve that question. Some experts say it has the hallmarks of a subcritical explosion that failed to achieve a sustained fission reaction, or perhaps only a fraction of the plutonium fissioned. If so, the North Koreans “learned they have a problem with their design, which will be helpful to them,” says Thomas Cochran, nuclear program director at the Natural Resources Defense Council in Washington, D.C. Others have not ruled out the possibility of a faux nuclear test staged with conventional explosives. The only way to know for sure is if surveillance planes or air-sampling towers in Japan sniff out radioactive fission products in gases venting from the test shaft. If radionuclides are not detected, Park argues, “the chance of a faked test is quite high.”

    Underwhelming?

    A Japanese official silhouetted against seismic waves from the 9 October test.

    CREDIT: KATSUMI KASAHARA/AP PHOTO

    Bomb or no, North Korea is bracing for further sanctions—as are scientists who argue that engaging the reclusive regime will pay dividends. The Bush Administration is considering whether to classify North Korea like Cuba and bar U.S. citizens from spending money there—a de facto travel ban—except in special circumstances. And “you can forget about North Koreans getting visas to come to the United States anytime soon,” predicts Matthew Bunn, a nonproliferation expert at Harvard University's Kennedy School of Government.

    Such a chill would be a big mistake, argues Frederick Carriere, vice president of The Korea Society in New York City, which has helped broker contacts between U.S. and North Korean scientists. Exchanges have proven effective for promoting understanding and reconciliation, he says, and “should be maintained at all costs.” But in the short term, that's unlikely to happen. Park Chan-mo, president of Pohang University of Science and Technology in South Korea, who helped organize a groundbreaking South-North science conference in Pyongyang last April, says the test is likely to scuttle follow-up meetings next month in China on university education and chemistry. “I'm very disappointed,” he says. Whether authentic, dud, or outright fake, North Korea's bomb is sure to contaminate efforts to reach out to its scientists.

  3. NOBEL PRIZE IN ECONOMICS

    Laurels for Theories That Demystified Inflation, Unemployment, and Growth

    1. Yudhijit Bhattacharjee

    An economist who corrected a fundamental misunderstanding about the relation between unemployment and inflation has won the 2006 Nobel Prize in economics. The award completes a U.S. sweep of this year's science Nobels.

    Edmund Phelps, a professor at Columbia University, receives the $1.37 million award for “his analysis of intertemporal tradeoffs in macroeconomic policy.” The citation recognizes two main contributions Phelps made over a 45-year-long career: He showed that economic expansion coupled with inflation will result in only a temporary reduction in unemployment, and he determined the fraction of national income that must be saved in order to enable future generations to enjoy the same level of consumption. Both lines of research have had considerable influence on economic policy in the United States and around the world.

    Realist.

    Phelps tackled problems with broad policy implications for modern economies.

    CREDITS (TOP TO BOTTOM): THE NOBEL FOUNDATION; BRENDAN MCDERMID/REUTERS

    When Phelps entered the field in the early 1960s after getting a Ph.D. from Yale University, economists believed that it was possible to cut unemployment through policies aimed at expanding the economy—such as printing more money—even though such policies would jack up inflation. Phelps realized that this macroeconomic view was divorced from the reality of how households and employers behave financially. When there's more money to go around, wages go up, and employees start spending more—leading to a short-term increase in overall employment—until they realize that prices have also been going up and that they are no better off than they were before. To offset future price rises, employees try to negotiate higher wages and scale back their demand for goods, causing employment to drop back to earlier levels. The theory, which Phelps advanced in the late 1960s, helped to explain the sorry state of the U.S. economy in the next decade when the country suffered high rates of both inflation and unemployment.
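
    In textbook form (a standard rendering of the natural-rate idea, not a formula quoted in the article), Phelps's insight is the expectations-augmented Phillips curve:

        $$ \pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}), \qquad \beta > 0, $$

    where $\pi_t$ is inflation, $\pi_t^{e}$ the inflation that wage-setters expect, $u_t$ unemployment, and $u^{*}$ its equilibrium (“natural”) rate. Unemployment stays below $u^{*}$ only while actual inflation outruns expectations; once expectations catch up, employment falls back and only the higher inflation remains.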

    Phelps's work has had a fundamental impact on the way governments shape economic policy, says Guillermo Calvo, a professor at the University of Maryland, College Park, who was Phelps's student at Yale and then his colleague at Columbia. “The reason that the U.S. economy was able to grow rapidly in the '90s without there being high inflation was the result of fiscal and monetary policies informed by his work,” Calvo says. Phelps's other work on the desirable rate of savings in an economy and the role of education and research in economic growth has had a similar influence on both economics research as well as policy, he says.

    In an autobiography posted on the Web, Phelps recounts his early brush with a macroeconomic phenomenon and his natural inclination for research. His family moved from Chicago to a New York suburb in the mid-1930s after his parents lost their jobs in the Great Depression. At the age of 7, Phelps writes, he compiled a census of the cats in his apartment complex. “A few years later, I liked to spend the late afternoons by the main road recording the distribution by state of the license plates of the cars passing by.” Phelps spent his teenage years digesting financial and economic news gleaned from newspapers, which he would then discuss at the dinner table with his father, who had majored in economics in college, and his mother, who had majored in home economics. The interest never faded.

    “He was constantly fishing for ideas, always thinking about scientific issues,” says Calvo, recalling his association with Phelps at Columbia. “If you went to him, he would always give you insights that could be used in your own research.”

  4. NOBEL PRIZE IN CHEMISTRY

    Solo Winner Detailed Path From DNA to RNA

    1. Robert F. Service

    DNA to RNA to proteins. Biology's central dogma—explaining how the secrets carried in the genes are animated—was limned decades ago. But “it doesn't say anything about how it's actually done,” says E. Peter Geiduschek, a molecular biologist at the University of California, San Diego. Through decades of painstaking work, Roger Kornberg, a biochemist and structural biologist at the Stanford University School of Medicine in Palo Alto, California, revealed in atomic detail the first step in this process, how DNA in cells is converted into messenger RNA, a process known as transcription. Last week, that achievement earned Kornberg a rare honor: sole possession of the 2006 Nobel Prize in chemistry.

    In the genes.

    Stanford University structural biologist Roger Kornberg (left) will pick up his Nobel Prize in December, 47 years after his father Arthur (center). At right, pol II (gray and yellow) transcribes DNA (blue and green) into RNA (red).

    CREDITS (TOP TO BOTTOM): JUSTIN SULLIVAN/GETTY IMAGES; ADAPTED FROM A. L. GNATT ET AL., SCIENCE 292 (2001)

    Kornberg's work has been a “terrific contribution,” Geiduschek says. Adds Peter Fraser, who heads the Laboratory of Chromatin and Gene Expression at the Babraham Institute in Cambridge, U.K., “If the secret of life could be likened to a machine, the process of transcription would be a central cog in the machinery that drives all others. Kornberg has given us an extraordinarily detailed view of this machine.”

    The announcement capped a banner week for Stanford as well as Kornberg's own family. On 2 October, Stanford geneticist Andrew Fire shared the physiology or medicine Nobel Prize for his part in revealing that snippets of RNA can inactivate genes (Science, 6 October, p. 34). Kornberg's father Arthur shared the 1959 physiology or medicine prize for helping show how DNA is copied and passed down from mother to daughter cells. The younger Kornberg was 12 years old when he accompanied his father to Stockholm. “I have felt for some time that he richly deserved it,” says the senior Kornberg—an emeritus professor at Stanford—of his son's work. However, he quips, “I'm disappointed it was so long in coming.” The Kornbergs are the sixth parent-child pair to win Nobel Prizes. Who knows: the family could be in for further scientific accolades. One of Roger's two brothers, Tom, is a developmental biologist at the University of California, San Francisco. Ken, meanwhile, is an architect who specializes in part in designing research buildings. Roger Kornberg, who says he was “simply stunned” when he received the news, is slated to collect the Nobel and $1.37 million at a December ceremony in Stockholm.

    When Kornberg began studying transcription in the 1960s, the notion of revealing the process on an atomic scale was “daunting,” he says. By then, researchers had found the process by which the enzyme RNA polymerase transcribes genetic information in bacteria and other simple organisms known as prokaryotes. But it quickly became apparent that transcription was far more complex in eukaryotes, higher organisms that include all plants and animals. To get a handle on this complexity, in the late 1980s, Kornberg's lab purified a eukaryotic transcription complex from yeast that included RNA polymerase II (pol II)—the primary transcription enzyme—and five associated proteins called general transcription factors. To their surprise, this complex didn't respond to other proteins known to activate specific genes. That discovery led them to another key molecular player known as “mediator”—a complex of some 20 proteins that relays signals from the proteins that turn on specific genes to pol II.

    Kornberg wanted to use x-ray crystallography to visualize just how pol II and its partner proteins work. But that required coming up with millions of identical copies of the protein complexes so they could pack together in an ordered crystal, much like the arrangement of oranges on a store shelf. Other groups had found a way to stop pol II in the act of transcribing DNA to RNA. But that produced a mixture of RNAs, some of which turned out to be active whereas others were inactive, and that mixture wouldn't form good crystals. Separating out just one set of RNAs in the transcription machinery took 6 years. Ultimately, Kornberg's lab discovered that heparin, a polysaccharide best known as a blood anticoagulant, binds to inactive forms of the RNA, leaving the desired ones behind. “Literally within days, we had crystals of the active RNA,” Kornberg says. His team blasted those crystals with a powerful beam of x-rays and carefully mapped out how the rays bounced off each of the atoms. That allowed the team to construct the first-ever images of pol II in action in exquisite detail (Science, 20 April 2001, p. 411). Since then, Kornberg's team has produced more than a dozen related images that have revealed everything from how pol II selects the right RNA bases to how it recognizes proteins that turn on expression of specific genes.
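
    The geometry behind that mapping is standard crystallography rather than anything unique to Kornberg's work: reflections from a crystal are strong only at angles satisfying Bragg's law,

        $$ n\lambda = 2d\sin\theta, $$

    where $\lambda$ is the x-ray wavelength, $d$ the spacing between planes of atoms, and $\theta$ the angle of incidence, so the pattern of spots encodes the positions of the atoms in the complex.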

    Kornberg says one of the lab's central goals today is to produce more detailed x-ray structures of the mediator complex. “We have already got crystals of about one-third of the mediator, [from] which we believe a structure will be delivered soon,” Kornberg says. That result will itself be a prize that biochemists will treasure for years to come.

  5. BIOMEDICINE

    NIH Funds a Dozen 'Homes' for Translational Research

    1. Jocelyn Kaiser

    The National Institutes of Health (NIH) last week unveiled a key piece in Director Elias Zerhouni's plan for speeding basic research findings to the clinic: a consortium of a dozen institutions that will revamp their clinical programs to encourage more translational research.

    Bench to bedside.

    The new consortium is meant to speed basic research findings to patients, such as this participant in a sleep study at the Children's Hospital of Philadelphia, a partner in the University of Pennsylvania's award.

    CREDIT: PHOTO COURTESY OF CHILDREN'S HOSPITAL OF PHILADELPHIA

    The 12 institutions, as diverse as Columbia University and the Mayo Clinic,* received 5-year Clinical and Translational Science Awards (CTSA) totaling $108 million the first year. The program will eventually replace NIH's 50-year-old program of General Clinical Research Centers (GCRCs), which now consists of some 60 facilities with beds for patients participating in clinical studies (Science, 21 October 2005, p. 422).

    To win one of the new awards, institutions agreed to create an institute, center, or department for clinical and translational research. The new entities will combine existing GCRCs with clinical training grants and programs to encourage more translational studies—for example, by exposing Ph.D. students to patient-oriented research and fostering basic clinical research teams. The CTSAs will also provide support staff, such as regulatory experts, for clinical trials. A consortium steering committee will meet regularly to work out common procedures, such as standardized informatics, so that they can share patient data for joint clinical studies. The CTSAs, says Zerhouni, illustrate “where the NIH needs to go to impact health to the greatest extent possible.”

    The CTSA program should make it easier to move basic findings from the lab to patients, says David Kessler, dean of the University of California, San Francisco, School of Medicine, a CTSA winner and a former commissioner of the Food and Drug Administration. “People want to do this. It's just that the barriers have been too high,” he says.

    One proposed condition that NIH has set aside for now is that the “homes” have the ability to appoint faculty and confer tenure, something departments usually do now. “That's going to take a while,” says Anthony Hayward of the NIH National Center for Research Resources, which made the awards. For now, faculty at most CTSAs will retain their primary appointment in an academic department, he says.

    NIH also awarded planning grants to 52 institutions so they can try for a future round of CTSAs.

  6. STEM CELLS

    California Stem-Cell Institute Unveils 10-Year Plan

    1. Constance Holden

    Last week, the California Institute for Regenerative Medicine (CIRM) unveiled a draft of its “strategic plan” for the next 10 years. The 149-page blueprint offers timelines for initiatives from basic research to public outreach and warns that no therapies using human embryonic stem (ES) cells are likely for at least a decade.

    CIRM's more modest goal, according to the plan, is to generate a “clinical proof of principle” that an ES cell therapy is able to “restore function for at least one disease.” Clinical trials for two to four other diseases should be in progress, it says. And the decade should produce 20 to 30 disease-specific cell lines that illuminate genetic illnesses.

    The plan lays out how CIRM will divvy up the $3 billion expected from bond sales authorized by Proposition 71. About $823 million is slated for basic research and $899 million for preclinical R&D. Clinical trials would get $656 million. The plan designates $295 million for training and $273 million for construction and renovation of labs to keep any ES cell work separate from facilities funded by the National Institutes of Health (NIH). That figure is “very modest” considering the scope of the state's research effort, worries Arnold Kriegstein, head of the Stem Cell Institute at the University of California, San Francisco (UCSF). “To encourage investigators, … we have to create environments where there's not even a hint of the possibility they're going to jeopardize their NIH support.”

    So far, the plan has gotten good reviews. The goals are “sensible, well-reasoned, realistic, and achievable,” says stem cell researcher Evan Snyder of the Burnham Institute in San Diego, who lauds it for including training for “the next generation” of scientists. But Robert Lanza, whose company Advanced Cell Technology Inc. recently moved its headquarters to Alameda, says he's a “bit disappointed” that CIRM isn't being “more ambitious.” Still, consumer groups seem reassured. The plan “shows refreshing honesty by acknowledging that it is unlikely to develop stem cell therapy for routine use during the next decade,” said the Foundation for Taxpayer and Consumer Rights in Santa Monica. CIRM's governing board, the Independent Citizens' Oversight Committee, was to review the plan at a 10 October meeting and adopt a final version in December.

    Although lawsuits have delayed the bond sale that the voters approved in November 2004, CIRM is now offering its first research grants, made possible with a $150 million loan authorized by California Governor Arnold Schwarzenegger. Kriegstein says there is hot competition for the first round of 45 grants: UCSF alone may submit as many as 41 applications.

  7. FUSION REACTOR

    ITER's $12 Billion Gamble

    1. Daniel Clery

    With its big political hurdle behind it, the make-or-break project must run a gantlet of technical challenges to see whether fusion can fulfill its promise of almost limitless energy

    Hotter than the sun.

    ITER's interior must endure colossal heat loads and neutron bombardment.

    CREDIT: ITER

    ABINGDON, U.K., AND GARCHING, GERMANY—Several times a year, hunters gather in the forests around Saint-Paul-lez-Durance in southern France to shoot wild boar. Over the coming decade, however, a portion of their hunting ground will be cleared, and the town's cafés will gradually fill up with newcomers from San Diego and Seoul, Moscow and Munich, Naka and New Delhi. Rising out of that forest clearing will be a 20,000-tonne experiment that just might point a way out of the world's looming energy crisis.

    In November, politicians representing more than half the world's population will sign an agreement that fires the starting pistol for the International Thermonuclear Experimental Reactor (ITER). Although first mooted in 1985, ITER has so far existed only on paper. The governments of China, the European Union (E.U.), India, Japan, South Korea, Russia, and the United States are now ready to hand over a $6 billion check for ITER's construction, followed by a similarly sized one for 20 years' operation. Then it is up to an international team of scientists and engineers to show that the thing will work.

    If it does, the rewards could be huge. With the global population due to climb from 6.5 billion to 8.1 billion by 2030 and the economies of China, India, and others hungry for power, many new generating plants will have to be built. The choices are stark: Burn more coal, with the inevitable impact on climate; build new nuclear fission plants and deal with the radioactive waste and risk of terrorism; or try alternative sources such as solar power, although these remain expensive and inefficient.

    But there is an outside bet: fusion. If it can be built, a fusion power station would emit no greenhouse gases and produce little radioactive waste, it cannot explode in a runaway reaction, and its fuel is found in seawater in virtually limitless quantities. Such a plant, unlike alternative sources, would produce the steady, reliable base-load power that cities need. And the economics are astounding: A 1-gigawatt coal-fired plant burns about 10,000 tonnes of coal per day, whereas a 1-gigawatt fusion plant would need roughly 1 kilogram of deuterium-tritium fuel.
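
    Those figures check out on the back of an envelope. The sketch below uses textbook constants (17.6 MeV per deuterium-tritium fusion, roughly 24 MJ per kilogram of coal) and an assumed 35% conversion efficiency, which is not a number from the article:

        MEV_TO_J = 1.602e-13
        e_fusion = 17.6 * MEV_TO_J              # joules per D-T reaction
        m_fuel = (2.014 + 3.016) * 1.661e-27    # kg of deuterium + tritium consumed

        thermal_per_day = 1e9 * 86400 / 0.35    # joules of heat for one 1-GWe day
        print(thermal_per_day / (e_fusion / m_fuel), "kg of D-T fuel per day")  # ~0.7
        print(thermal_per_day / 24e6 / 1000, "tonnes of coal per day")          # ~10,000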

    We're not even close yet, however. Indeed, skeptics joke that “Fusion is the power of the future and always will be.” The sun is a gigantic fusion reactor, but recreating the conditions here on Earth in which atomic nuclei collide with such force that they fuse together has proved fiendishly difficult. A few dozen examples of the currently favored reactor design—a doughnut-shaped vessel known as a tokamak—have been built since the 1950s, but only a handful have managed to get fusion in their plasma. In 1997, the Joint European Torus (JET) in Abingdon, U.K., the biggest existing tokamak, managed to produce 16 megawatts, but that was only 65% of the power used to keep the reaction running.

    CREDIT: (TOP ROW, L TO R) AP PHOTO; AFP/GETTY IMAGES; (2ND ROW, L TO R) THE SCIENCE MUSEUM/SCIENCE & SOCIETY PICTURE LIBRARY; EFDA-JET
    Are we there yet?

    Political as well as technical trials have dogged the footsteps of fusion. This largest of international collaborations will likely hit some more bumps before it is done.

    CREDIT: IPP

    By studying those earlier reactors, plasma physicists have derived scaling laws that predict that a bigger tokamak (ITER is twice the size of JET in linear dimensions) would overcome many of the problems. But ITER is not a prototype power plant; it is an experiment designed to finally decide whether taming the sun's energy to generate electricity is even viable. ITER aims to produce 500 megawatts of power, 10 times the amount needed to keep it running. But a moneymaking energy utility would need several times that amount, and it would have to keep on doing it steadily for years without a break.

    ITER needs to show such performance is at least possible. But it faces many challenges: Scientists and engineers need to find a lining for its inner walls that can withstand the intense heat; they must tame the plasma instabilities that plague existing reactors; and they must find a way to run the reactor in a steady state rather than the short pulses of existing reactors. ITER must do all of this and, for the first time, maintain the plasma temperature with heat from the fusion reaction itself rather than an external source.

    “There's no doubt that it's an experiment. But it's absolutely necessary. We have to build something like ITER,” says Lorne Horton of the Max Planck Institute for Plasma Physics (IPP) in Garching, Germany. Researchers are reasonably confident that ITER can achieve the basic goals laid out in the project's plans, but there is less certainty about what comes after that. “I'm pretty confident ITER will work as advertised, but you can't be 100% sure,” says Christopher Llewellyn Smith, director of the U.K. Atomic Energy Authority's Culham Laboratory in Abingdon, home of JET. IPP's Hartmut Zohm agrees: “Certainly there's an element of risk. I'm very confident, 90-something percent, that we can produce a plasma dominated by fusion. But I'm much more uncertain that it will make a viable fusion power reactor.” German physicist Norbert Holtkamp says that ITER's goal of generating excess power is clear: “Either it can do it, or it can't. If it fails, the tokamak is out.”

    CREDIT: ITER

    The waiting game

    The ITER project is currently in a state of limbo. Researchers nailed down the design of the reactor in 2001 after a 13-year effort costing about $1 billion. Since then, governments have been in charge. The ITER partners at that time—the E.U., Japan, and Russia (the United States had pulled out in 1999)—began negotiating who would construct which parts of the reactor. By December 2003, China and South Korea had joined the team, the United States had rejoined, a division of labor had been agreed upon, and the list of sites had been whittled down from four to two: Rokkasho in northern Japan and Cadarache, near Saint-Paul-lez-Durance. Politicians gathered in Washington, D.C., to close the deal but failed to decide between the two sites, and the initialing of the agreement was put off (Science, 2 January 2004, p. 22).

    Acrimonious negotiations continued for 18 months. Finally, in June 2005, a deal was struck: Japan agreed to support Cadarache, and in exchange, the E.U. will place some of its contracts with Japanese companies and will share the cost of extra research facilities in Japan (Science, 1 July 2005, p. 28).

    Since then, negotiators have reworked the international agreement ahead of the signing next month. India has also joined, and key appointments have been made. Kaname Ikeda, a Japanese diplomat with experience in nuclear engineering, will be ITER's director general; its principal deputy director general will be Holtkamp, who managed accelerator building on the Spallation Neutron Source at Oak Ridge National Laboratory in Tennessee. Six other deputies—one from each partner apart from Japan—were appointed in July.

    By the end of this year, the ITER organization will employ no more than 200 people. But across the globe, as many as 4000 researchers are already working directly or indirectly on the project, and they're itching to have some input. Fusion science has moved on in the 5 years since ITER's design was completed, and many want to make changes in the light of recent results. This generates a creative tension between the wider fusion community, which would like an adaptable machine to test as many scenarios as possible, and the ITER staff, who want the machine built on time, on budget, and ready for the next step: a power plant prototype.

    “The physics community still wants modifications, all the bells and whistles. We'll always keep asking. It's healthy,” says Horton. Valery Chuyanov, head of ITER's work site in Garching and now the nominee deputy director general for fusion science and technology, counters that physicists “must understand the boundary conditions. We must respect the agreement and keep within the set cost. They can't expect miracles.”

    Holtkamp has heeded calls for a design review and will convene a meeting in December. But, as Chuyanov points out, not everything has to be set in stone now. Contracts for big-ticket items such as the building, the vacuum vessel, and the superconducting magnets must be signed almost immediately, but other systems are years away from procurement. “The design review is not a moment in time but a continuous process,” Chuyanov says.

    Above all, researchers want the ITER design to be flexible. Since it was fired up in 1983, JET has had numerous transformations, including the retrofitting of a divertor, a structure at the bottom of the tokamak that siphons off waste heat and particles and is now considered essential. Researchers worry that if ITER's design is too fixed and the current best configuration turns out not to work, they will have little room to maneuver. “We need to maximize the flexibility of the machine. It must give enough information to build a first electricity-generating reactor. Society can't afford another intermediate step,” says IPP's Harald Bolt.

    Skin deep

    One area in which researchers would particularly like some wiggle room is the lining of the inner surface of the vacuum vessel. This so-called first wall must be able to withstand huge heat loads: Parts of the divertor will weather as much as 20 megawatts per square meter for up to 20 seconds. In an ideal world, researchers would line their reactor with carbon: It can stand the heat and doesn't erode and pollute the plasma. But tritium in the fuel readily reacts with carbon, and the resulting radioactive hydrocarbons can be hard to shift. Nuclear licensing authorities require that all tritium must be rigorously accounted for because any released in an accident would soon enter the food chain. So tritium retention in the vessel is a major worry.

    Many believe the answer to be tungsten: It has a very high melting point and low erosion. The problem is that if the plasma wobbles and strikes the surface, it would set loose tungsten ions. Because tungsten has a high atomic number, once it is stripped of all its electrons it has a huge positive charge, so even a few ions would severely dilute the plasma. As a compromise, some tokamaks have experimented with beryllium, the metal with the lowest atomic number (4, compared with tungsten's 74). But beryllium has a low melting point, so it cannot be used in areas with a large heat load, it erodes easily, and neutrons can transmute it into hydrogen or helium.

    ITER's designers have opted for a compromise: The first wall will be 80% beryllium with 5% to 7% carbon and about 12% tungsten, both concentrated in the divertor region (see diagram, above). But researchers know that beryllium is just not tough enough for a generating plant and is highly toxic. “I'm convinced that at a late stage we need to convert ITER to full tungsten coverage to learn if this scenario is compatible with a power reactor,” says Bolt, head of materials research at IPP.

    But Kimihiro Ioki, who heads the vacuum vessel and blanket division in the existing ITER organization, warns that changing the 700 square meters of the first wall would be no mean feat. The 421 panels of the main wall (excluding the divertor) each weigh more than 1 tonne, and technicians would have to extract them one by one through a small port using a many-jointed mechanical arm. Ioki estimates it would take at least a year.

    To run a power reactor with an all-tungsten first wall, operators would have to be sure that the plasma will behave and not touch the sides. Today, it's far from clear that researchers will be able to guarantee that. “A tokamak is the worst lab experiment you can do. It's an extremely hostile environment, and there are too many variables. It's a very difficult process to understand,” says plasma physicist Steven Lisgo of the Culham lab. Phenomena at work in a tokamak range in size from micrometers to meters across, operate over time periods ranging from microseconds to years, and interact in complex nonlinear ways. It's the antithesis of a nice, clean controlled experiment, and theorists still struggle to understand everything that is going on.

    Feeling the heat

    In the beginning, during the 1950s, researchers thought it was going to be easy. Theorists made a calculation, based on standard random diffusion of particles, about the transport of energy and particles from the burning center of the plasma to the edge and concluded that a fusion reactor would only need to be a half-meter across. The early machines revealed, however, that transport was actually five orders of magnitude higher because of turbulent fluctuations in the plasma. “That's why ITER has an 8-meter radius,” says Zohm. Ever since those early days, the trend has been toward ever-bigger tokamaks, on the simple understanding that a larger body “cools” more slowly than a small one. And researchers have been wrestling with turbulence. “It's really difficult physics. We're only now starting to understand it,” says Zohm.
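
    The logic of that early estimate, and of the remedy, follows from a random-walk scaling (a textbook relation, not one quoted here): heat leaks out of a plasma column of minor radius $a$ on a confinement timescale of roughly

        $$ \tau_E \sim \frac{a^2}{\chi}, $$

    where $\chi$ is the effective heat diffusivity. Turbulence inflating $\chi$ by five orders of magnitude must be bought back with a much larger $a$ and better-confined operating regimes, hence the march toward ever-bigger machines.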

    In the center of the sun, the heat and pressure necessary to spark fusion come from the mass of material pressing down on the core. On Earth, we don't have the benefit of all that gravity, and a tokamak is not a strong vise (see sidebar, p. 239); ITER's plasma pressure will reach only about 5 atmospheres in the center. To compensate, the plasma has to be very hot, about 100 million kelvin. Researchers don't yet know which heating method will work best in a power reactor, so ITER will be equipped to try several, including electron beams, ion beams, neutral particle beams, and microwaves.

    But as Aladdin discovered, once you fire up such a genie in a bottle, it's devilishly hard to control. Researchers have found a plethora of instabilities that cause plasma to wobble, bulge, vibrate, and generally misbehave. It's like trying to squeeze a balloon full of water: The more you squeeze, the more it bulges between your fingers. These troublesome behaviors sport exotic names such as Alfvén waves, neoclassical tearing modes, saw-tooth instabilities, and resonant magnetic perturbations. The bind for researchers is that to get the most fusion power out of their tokamak, they must squeeze the balloon full of plasma as much as possible, but more pressure breeds more instabilities, which ultimately doom the fusion.

    Researchers are fighting instabilities in a number of ways. One is to tweak the distribution of the plasma current flowing around the tokamak ring. Looking at a cross section of the ring, if the heating beams are used to give the current a boost in one spot here and another spot there, this can calm instabilities and allow the plasma to reach a higher pressure. “You can make very small changes in the internal current distribution, and instabilities can go away,” says Zohm.

    Another method is to change the shape of the plasma. In early tokamaks, the plasma was usually circular in cross section, but more modern machines have D-shaped plasmas or almost triangular ones. That helps because the magnetic surfaces that you cross as you move outward from the center of the plasma keep changing direction slightly, an effect known as “magnetic shear.” “Shear suppresses [turbulent] eddies, and so transport is less efficient. It keeps energy in,” says Richard Buttery of the Culham lab.

    Black art.

    Researchers hope that by nudging, squeezing, and bombarding the plasma, they can get it to burn hot without instabilities.

    CREDIT: ITER; ILLUSTRATION: P. HUEY/SCIENCE, ADAPTED FROM LABORATOIRE DE PHYSIQUE DES MILIEUX IONISÉS ET APPLICATIONS (J. BOUGDIRA, R. HUGON)

    Researchers also found another way to keep plasma pressure confined when they were trying to solve a different problem: how to siphon off waste particles and heat from the edge of the plasma. When a deuterium nucleus fuses with a tritium nucleus, they produce a fast-moving helium nucleus, or alpha particle, and a speedy neutron. The neutrons are unaffected by the tokamak's magnetic fields, so they zip straight out and bury themselves in the surrounding “blanket” material, where their energy can raise steam to drive an electricity-generating turbine.

    The charged alpha particles are held inside the tokamak and heat up the plasma. But once they've imparted their energy, these alphas become waste and must be removed from the plasma before they quell the fusion. In the late 1980s, researchers decided to try reshaping the magnetic field toward the bottom of the plasma vessel so that some of the outer magnetic surfaces, instead of bending round and up again, actually diverge at the bottom and pass through the vessel wall. The result is that any particles that stray out near the edge of the plasma eventually get swept down to the bottom and dumped into the divertor, a heat-resistant target where particles are cooled and then pumped out of the vessel.

    JET was first fitted with a divertor in 1991. The devices are now considered indispensable because they not only remove waste but also help confine the plasma. Although researchers don't yet understand why, these open, diverging magnetic surfaces create a “transport barrier” inside the bulk of the plasma, near the edge. The pressure increases very steeply across this barrier so that the core of the plasma can be maintained at a significantly higher pressure—a configuration that plasma physicists call H-mode.

    H-mode has been so successful that it is now part of ITER's baseline scenario, but it does have a downside: Running the plasma in H-mode can lead to the mother of all instabilities, known as edge-localized modes (ELMs). These happen because the transport barrier doesn't let out excess energy gradually but bottles it up until it's finally released all at once. “ELMs are not fully understood. They are bursts of power, like earthquakes,” says Jerome Pamela, head of the JET project. ELMs can damage the first wall or send a blast of energy down to the divertor. Few believe that they will be able to banish ELMs altogether, but if they can be made small and regular, they are manageable. “The name of the game is to let the energy out smoothly,” says Buttery.

    Such is the value of H-mode that even at this late stage ITER's designers are considering design changes to cope with ELMs. One scheme investigated at JET involves injecting impurities such as nitrogen into the transport barrier to make it a bit more leaky, but this also degrades H-mode, so it is not popular. Another tactic, tested at IPP with its Asdex Upgrade tokamak, is to regularly fire pellets of frozen deuterium into the barrier. This sparks an ELM every time, keeping them steady and small. This system “will be installed” on ITER, says Chuyanov, “but is it enough? We don't know.”

    A late entrant into the race is a system developed in the United States using the DIII-D tokamak at General Atomics in San Diego, California. Extra magnetic coils added to the tokamak create a sort of chaotic static in the transport barrier, making it leaky enough to avoid large ELMs. “It's much simpler than pellets, more reliable,” says Pamela. The problem is where to put the coils. Ideally, they would be inside the reactor vessel, close to the plasma, but that sort of reconfiguration would be one step too far for ITER's designers. “We're working very actively to find a solution for ITER, but it's impossible to put the coils inside,” says Chuyanov. Researchers at JET are considering fitting them outside their vessel to see whether that might work for ITER.

    Testing, testing.

    Engineers have already built pieces of ITER, such as this slice of vacuum vessel, to test construction.

    CREDIT: ITER

    Even if one of these techniques does tame ELMs, no one knows what will happen when ITER's self-heating regime kicks in. The fast-moving alpha particles created by fusion will have much more energy than the bulk of the particles in the plasma, and these could stir up a whole hornets' nest. “This is the first time a plasma has been heated by alphas. It could create new instabilities. Experts don't think it will, but we cannot logically exclude that possibility,” says Llewellyn Smith. “That's why we need ITER,” adds Zohm. “We can't simulate internal heating. It's the part we know least about.”

    Seeking steady state

    Although there may be surprises along the way and whole new scenarios may have to be developed, few doubt that ITER will reach its goal of generating large amounts of excess power. But power is not much use commercially in bursts a few minutes long followed by a long wait while the reactor is reconfigured. Tokamaks are by their nature pulsed devices. Some of the magnetic field that confines the plasma is provided by plasma particles flowing around the tokamak—a current of some 15 million amps. This current is induced by a rising current in coils in the central hole of the tokamak ring, the coils and plasma acting like the primary and secondary windings of a transformer. But the current in the coils can't keep rising forever, so the length of any fusion run is limited. The French tokamak at Cadarache, Tore Supra, holds the record with 6-minute pulses.

    But pulsed operation would put intolerable stresses on a power plant that must keep working for decades, so researchers are looking for other ways to drive the plasma current. Firing the heating beams in a particular direction will push plasma around the ring, but this will never provide all the necessary current. In the 1980s, theorists predicted another way: If the pressure gradient in the plasma is high enough, particles, which move through the plasma by spiraling around magnetic field lines, will interfere with each other in such a way as to produce a net current around the ring. This “bootstrap” current was demonstrated in the 1990s, and the Asdex Upgrade, for example, has produced as much as 30% to 40% of its current from the bootstrap effect.

    Getting more bootstrap is hard because of the usual problem: It needs higher pressure gradients in the plasma, which mean more instabilities. Nevertheless, once ITER has demonstrated its baseline scenario, researchers will be aiming for an “advanced” scenario in which the induction coils are switched off and 80% to 90% of the plasma current is generated by bootstrap with the remaining push provided by heating beams. “At the very least, we will want long pulses,” says Horton. But researchers don't expect the advanced scenario to be easy. “It will be a real pain to get to this,” says Zohm.

    Ready when you are

    With a total price tag of about $12 billion, ITER is the most expensive experiment in the world apart from the International Space Station. Some plasma physicists are skeptical that fusion will ever be a power source on Earth and argue that we shouldn't be wasting our money on ITER. After 50 years of research, even fusion's flag-wavers concede that it may still be another half-century until we have a workable fusion power plant, but ITER researchers are undaunted. “By the middle of the century, we'll know how to do it. Then it's up to the world community to decide if they want it,” says Zohm. Soviet fusion pioneer Lev Artsimovich, speaking more than 3 decades ago, had the same message. Asked when fusion power would be available, he answered, “Fusion will be ready when society needs it.” That time may be fast approaching.

  8. FUSION REACTOR

    How to Squeeze a Plasma

    1. Daniel Clery

    After numerous attempts during the 1940s and 1950s to find an arrangement of magnets to confine a plasma—an ionized gas—Soviet physicists Igor Tamm and Andrei Sakharov came up with the tokamak. The name derives from the Russian words for “toroidal chamber in magnetic coils.”

    In a twist.

    The huge superconducting magnets required to contain ITER's plasma are a major engineering challenge.

    CREDIT: EFDA-JET

    The searingly hot plasma is kept in place by the combined effects of two magnetic fields. The first, known as the toroidal field, is generated by vertical magnetic coils ringing the vacuum vessel—in the case of the International Thermonuclear Experimental Reactor (ITER), 18 of them made from a niobium-tin superconductor. These create a field that loops horizontally through the tokamak's “doughnut.”

    The second, poloidal field forms vertical loops. It is generated by the plasma flowing around the torus in a current of 15 million amps. This current is itself created by electromagnetic induction: The plasma current acts as the secondary windings of a transformer, with superconducting coils in the middle of the torus acting as the primary windings. A rising current in the primary coils induces the plasma current to flow around the torus.

    The combined magnetic field carves a slow spiral around the whole of the torus, and plasma particles zip around the ring in tight orbits around the spiraling magnetic field lines. The configuration keeps the particles clear of the walls and maintains a pressure in the plasma that is key to fusion.
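
    In the usual large-aspect-ratio approximation (a textbook relation, not given in the article), the pitch of that spiral is measured by the safety factor

        $$ q \approx \frac{r\,B_{\mathrm{tor}}}{R\,B_{\mathrm{pol}}}, $$

    where $r$ and $R$ are the minor and major radii and $B_{\mathrm{tor}}$ and $B_{\mathrm{pol}}$ are the toroidal and poloidal field strengths. Keeping $q$ comfortably above 1 is what suppresses the most dangerous current-driven kink instabilities.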

    The tokamak is not the only way to confine a plasma. Physicists are actively pursuing other schemes, such as stellarators and reverse-field pinch machines. But the tokamak is the most successful design so far and forms the basis of ITER and, most likely, the commercial power reactors that will come after it.

  9. BRIAN O'NEILL PROFILE

    Trying to Lasso Climate Uncertainty

    1. John Bohannon

    An expert on climate and population looks for a way to help society avoid a “Wile E. Coyote” catastrophe

    LAXENBURG, AUSTRIA—A few weeks ago, Brian O'Neill hunkered down around a table with a dozen other climate scientists in Cape Town, South Africa, to talk about the future of the planet. It was no idle speculation: Whatever they agreed upon—they knew in advance—would have clout. They were hammering out the final draft of a chapter on research methods for the massive “Fourth Assessment” of the Intergovernmental Panel on Climate Change (IPCC). The product of 3 years of consensus-building among several hundred researchers from around the world, the IPCC report is the scientific bedrock on which policymakers will negotiate everything from carbon taxes to long-term greenhouse gas targets.

    But for all its authority, the IPCC exercise left O'Neill with a nagging concern: What were they leaving out? “It's important that we climate scientists speak with a single voice,” he said in an interview back in his office, high up in the attic of a former Habsburg palace outside Vienna. But “the extreme scenarios that tend to fall out of the IPCC process may be exactly the ones we should most worry about,” he says.

    Modelers' home.

    A Habsburg palace near Vienna is inhabited by IIASA scientists.

    CREDIT: B. D. FATH

    O'Neill, a climate scientist at the International Institute for Applied Systems Analysis (IIASA) here, is frustrated to see uncertainties in research used as a reason to delay action. At age 41, he is one of the youngest scientists in the IPCC network trying to reformulate climate-change projections that can cope better with uncertainty by accounting for “future learning.” O'Neill hopes the strategy will make it clear that, even with gaps in understanding, it pays to act now.

    His work is gaining notice. Although an American, O'Neill has scooped up one of the coveted European Young Investigator Awards (EURYI), a $1.5 million grant meant in part to keep Europe's most promising scientists at home.* “He is one of the brightest young scientists out there, and we're all watching to see what he does,” says Simon Levin, an ecologist at Princeton University.

    A winding path

    O'Neill's job is to predict the future, but his own career path has been unpredictable. With 3 years' training in engineering and a degree in journalism, he became passionately involved in the 1980s in efforts to prevent ozone depletion, working for Greenpeace in California. After collecting a Ph.D. in earth-system sciences from New York University, he did research stints at Brown University and the Environmental Defense Fund in New York City.

    In 2002, he moved to IIASA, a center for multidisciplinary research founded in 1972. Here, O'Neill has built up a new program focusing on population and climate change. The treatment of demographics in most climate-change analyses, he says, is “simplistic at best.” With the EURYI money, he's assembled a team of a half-dozen demographers, economists, statisticians, and physical scientists to sharpen the models.

    A long-limbed basketball player who looks like he could be fresh out of graduate school, O'Neill seems to peel away layers of uncertainty as he speaks. His slow-paced answers to questions often begin with a detailed preamble of assumptions, conditions, and footnotes. But as the father of two daughters, he says, “thinking about how the world will be in 50 years is not so abstract for me anymore.”

    At IIASA, his work focuses on building realistic demographic projections, and China has become his main beat. Different predictions of how the country's population will age and urbanize—and how carbon-emission policies will shape Chinese consumption—have an enormous effect on global climate change scenarios. But obtaining accurate demographic data has been difficult. With the help of a Chinese member of his new team, O'Neill has done an analysis revealing that the IPCC assumptions about China's rate of urbanization and energy consumption could be off by a factor of 2.

    Futurist.

    Brian O'Neill and his group think big improvements are needed in estimates of China's role in climate change.

    CREDIT: PROVIDED BY B. O'NEILL

    Learning about learning

    Earlier this year, O'Neill organized a unique meeting at IIASA, bringing together experts from different areas of climate science, economics, and demography to think about how they generate knowledge. One of the most important questions that emerged, says Klaus Keller, a climate scientist at Pennsylvania State University in State College, is how to avoid “the Wile E. Coyote effect.” The cartoon coyote often doesn't realize he's falling off a cliff until he looks down, too late to turn back. One of the potential cliffs in climate change involves the ocean's conveyer-belt system—known as the meridional overturning circulation (MOC)—which prevents a Siberian chill from spreading across western Europe by carrying warm water north from the equator. Scientists worry that global warming could abruptly change or even shut down the MOC. “These are the kind of climate thresholds that we need to identify,” says Keller.

    Scientists need to know more about the natural variability in MOC behavior, says O'Neill. But they don't even know “how precise your measurements have to be” or how large an area must be studied before uncertainty could be sufficiently reduced to spot “the edge of the cliff.” He argues that the only way to attack such complex uncertainties with limited time and resources is to have scientists from different fields work together, assessing observations over many years to learn which approaches pay off the most. O'Neill and others did exactly this with 2 decades of research on the carbon cycle, finding that some kinds of observations narrowed uncertainty in model parameters far better than others. Such big-picture, multidisciplinary studies are low on the priority scale of funding agencies, but this is exactly what's needed if you want “to learn about the potential of an MOC shutdown,” he says.
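    O'Neill's payoff comparison can be made concrete with a toy calculation. The sketch below is purely illustrative, a linear-Gaussian stand-in rather than his group's actual carbon-cycle analysis; every name and number in it is hypothetical. It scores each observation type by how much it shrinks the uncertainty of a single model parameter.

      # Hypothetical sketch: score observation types by how much they narrow
      # the uncertainty of one model parameter, theta. Assumes a linear-Gaussian
      # setup, y = h*theta + noise, with a Gaussian prior on theta.

      def posterior_std(prior_std, h, noise_std, n_obs):
          """Uncertainty in theta after n_obs observations y = h*theta + noise."""
          prior_precision = 1.0 / prior_std**2
          data_precision = n_obs * (h / noise_std) ** 2  # information added by the data
          return (prior_precision + data_precision) ** -0.5

      prior_std = 1.0  # prior uncertainty, arbitrary units
      # Two made-up observation types: one precise but weakly coupled to the
      # parameter, one noisier but strongly coupled. The second pays off far more.
      for name, h, noise_std in [("weakly coupled", 0.1, 0.3), ("strongly coupled", 1.0, 0.5)]:
          print(name, round(posterior_std(prior_std, h, noise_std, n_obs=20), 3))

    Repeated over many candidate observing strategies, a score of this kind is one crude way to ask which measurements pay off the most.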

    The second big question to emerge from the IIASA sessions is how to tell whether mainstream research is headed in the wrong direction. O'Neill, together with Michael Oppenheimer of Princeton University and Mort Webster of the Massachusetts Institute of Technology in Cambridge, uses the term “negative learning” to describe cases in which scientific consensus builds around the wrong model. “This is what happened with ozone,” says Oppenheimer. People believed that ozone's key interactions were with other gases until scientists realized that the critical reactions driving ozone depletion occur on the surfaces of airborne particles. With revised reaction rates, it was suddenly clear that the planet's protective ozone layer was in much bigger trouble than had been thought. Oppenheimer proposes that scientists team up with philosophers and historians to find common signs of negative scientific learning. A search for such red flags could be built into climate science's regular review process. And O'Neill says more funds should be set aside to explore hypotheses outside the mainstream.

    Researchers desperately need a strategy for tackling climate uncertainties, O'Neill says. Michael Schlesinger, a climate scientist at the University of Illinois, Urbana-Champaign, points to another example. Polar ice sheets are melting more rapidly than anticipated, and some observers fear that this could lead to a catastrophic sea-level increase (Science, 24 March, p. 1698). “Things are happening right now with the ice sheets that were not predicted to happen until 2100,” Schlesinger says. “My worry is that we may have passed the window of opportunity where learning is still useful.”

    Whether a catastrophe can be averted using some form of scientific introspection—or learning about learning, as O'Neill calls it—remains unclear. The concept, like O'Neill's career, is still at an early stage of development.

  10. NEUROSCIENCE

    Brain Evolution on the Far Side

    1. Elizabeth Pennisi

    Over evolutionary time, the protein portfolio of the receiving side of the synapse has become more sophisticated—could that be why brains got bigger and smarter?

    Mind the gap. To Londoners, that phrase, which warns subway commuters to be careful stepping off platforms onto trains, has become such a cliché that it's emblazoned on T-shirts and posters. But to Seth Grant, who works at the Wellcome Trust Sanger Institute in Hinxton, just an hour or so north of London, it's an apt summation of his research focus.

    After years of studying the 10- to 50-nanometer gaps between nerve cells called synapses, Grant is convinced that a key to the evolution of the brain lies within these crucial connections. The human brain relies on a quadrillion synapses to connect its circuitry, and Grant has been comparing, in species big and microscopic, the protein milieu of the synapse's far side, the portion that receives another neuron's signals.

    Where the action is.

    Nerve cell connections called synapses (illustration) depend on many proteins, including large complexes (blue, with red), to relay signals.

    CREDIT: SETH GRANT/WELLCOME TRUST SANGER INSTITUTE

    As nerve cells fire, the transmitting neuron quickly releases chemicals called neurotransmitters—the release takes about 200 microseconds in the giant squid—that zip across the synapse to another nerve cell's membrane. That “postsynaptic” membrane is awash with cell surface receptors and signaling molecules standing by to relay incoming signals throughout the cell. And with some 1100 proteins, says Grant, “the most molecularly complex structure known [in the human body] is the postsynaptic side of the synapse.”

    Grant maintains that these proteins hold new clues about the evolution of the brain. He has found major differences among species in the protein content of the postsynapse, disparities that could help explain, for example, the improved cognitive capacities of vertebrates. “Maybe synapse protein evolution has been more important than [increases in] brain size,” says Grant.

    His work also suggests that neurobiological research with invertebrates is less relevant to the human brain than researchers have assumed. “The textbook version is that a synapse is the same thing in a human and a slug,” says Svante Pääbo, a molecular geneticist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. “[Grant] shows that that is not likely to be the case.”

    Many evolutionary biologists attribute the unique properties of the human brain to its relatively large size and complex cortex. But Grant thinks that ever-more-intricate molecular interactions within synapses have made possible the circuitry that underlies our ability to think and feel. “There are classes of proteins that arrived at different times [in evolution] and expanded at different rates,” he says. These expansions, Grant adds, preceded the increase in brain size “as though [they were] a prerequisite for brain size.”

    For a first pass at revealing the origins of the synapse, Grant's postdoc Richard Emes, now at University College London, looked into the evolutionary history of 650 proteins, all of which operate at the receiving end of the mouse synapse. He sought out the genes for those postsynaptic proteins in 19 species, including tunicates, mosquitoes, nematodes, fruit flies, fish, frogs, cows, dogs, chimps, humans—and even yeast. Even though yeast lacks a nervous system, it uses about 20% of the same proteins employed by the mouse synapse, Grant reported earlier this month at a meeting* in Hinxton, U.K. Only later in evolution, he suggests, were these 120 or so proteins adopted for use in nervous systems.

    Protein smarts.

    Synaptic proteins, some of which have their origins in yeast (top), increased in number throughout evolution, with mice having more than tunicates (middle), possibly explaining ever more complex nervous systems.

    CREDITS (TOP TO BOTTOM): DENNIS KUNKEL/VISUALS UNLIMITED; P. DEHAL ET AL., SCIENCE 298 (2002); PHOTOS.COM

    The insects and the nematode had twice as many of the mouse synaptic proteins as yeast, Grant's team found. All the vertebrates had the full complement of genes for these proteins. Until these results, no one had considered that such big differences in synaptic proteins might exist among species, says Grant.

    He and his colleagues also examined the species differences in a particularly important set of postsynaptic proteins. This set forms the NRC/MASC complex, a gatekeeper that relays incoming signals and ultimately activates the postsynaptic nerve cell. The “NRC” part has the N-methyl-D-aspartate (NMDA) glutamate receptor, which is important in learning and memory, at its core. The “MASC” part centers on the membrane-associated guanylate kinase signaling complex.

    In vertebrates, the NRC/MASC complex can pack in more than 100 proteins, Emes, Grant, and their colleagues have found. These include neurotransmitter receptors, a calcium ion channel, proteins that connect to signaling proteins inside the cell, kinase enzymes that bind to the calcium channel, and proteins that help hold the whole complex in the right configuration. But in invertebrates, fewer proteins are involved in the complex, Grant reported at the meeting.

    This work shows that “the postsynaptic complexes and the [signaling] systems have increased in complexity throughout evolution,” says Berit Kerner, a geneticist at the University of California, Los Angeles. In particular, the researchers found that vertebrate NRC/MASC complexes have more receptors and associated proteins, as well as a greater number of enzymes that help set up the signaling pathways. “There's more tools in the toolbox,” says Grant.

    In addition, his team has discovered a striking difference in the tail of one of the NMDA receptor-associated proteins. In vertebrates, that tail extends through the cell membrane, where it connects to signaling pathways. In invertebrates, the tail is much shorter. As a result, it can't relay messages into the cell the same way, says Grant, adding that the protein's long tail may help explain why the vertebrate synapse is so plastic and can respond in different ways to different incoming signals.

    Such evolutionary changes may have also made possible greater diversity within the brains of vertebrates. Chris Anderson, another Grant postdoc, has unearthed brain region differences in the makeup of postsynaptic proteins within the mouse. He tested extracts from 22 brain areas, including the hippocampus, cortex, and cerebellum, for genes and proteins that are involved with the NRC/MASC complex. He also assessed the expression patterns of the genes and labeled two dozen of those proteins in actual tissue samples.

    The genes for the components of the NRC/MASC complex work in synchrony: When one is active, so are the rest, Anderson reported at the meeting. But to the researchers' surprise, the activity of this collection of genes varied among brain regions. This variation likely enables the different parts of the brain to do their specific jobs, says Grant. Moreover, when the researchers looked at the activity of “ancient” NRC/MASC genes—those also found in yeast and insects—versus the more recent genes found only in vertebrates, they discovered that the recent genes varied most in their expression.

    Grant has further observed that eliminating particular proteins in the mouse NRC/MASC complex alters specific cognitive abilities. For example, when one of the recently evolved signaling genes is disabled in mice, causing the rodents to lack a protein called SAP102/Dlg3, the animals have trouble learning spatial tasks but not visual tasks. Typically, mice forced to swim in a tank can find platforms by keying in on flags on the platforms or, lacking a visual cue, by developing mental maps of landing spots. But Lianne Stanford in Grant's lab finds these mice must swim in ever-wider circles to find unflagged platforms. In contrast, mice lacking the gene for another protein in the complex, PSD95, can't find the platforms at all. These genes “can be important for one aspect of cognition but not another,” Grant explains.

    Other researchers are intrigued by the novelty of Grant's ideas about the evolution of the brain. “They should certainly raise some eyebrows,” says Jonathan Flint, a psychiatric geneticist at Oxford University in the U.K. Kerner cautions that other factors, such as the sheer number of cells, likely help explain differences between the invertebrate and vertebrate brain. And Pääbo speculates that invertebrates could have their own undiscovered set of proteins, not present in mammals, that would make their synapses as complex as those in vertebrates. Nonetheless, Pääbo is impressed, noting that Grant “provides a vision for how to approach a perhaps important, unexplored aspect of the evolution of cognitive complexity.”

    • * “Integrative Approaches to Brain Complexity” took place 28 September to 1 October in Hinxton, U.K.

  11. PUBLIC HEALTH

    Gerberding Defends Her Transformation of CDC

    1. Jocelyn Kaiser,
    2. Jennifer Couzin

    The director denies that a reorganization is weakening the public health agency

    CREDIT: CDC

    Hopes were high 4 years ago when Julie Gerberding, a respected infectious-disease researcher, took the helm of the Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia. Following the post-9/11 anthrax mail attacks the year before, some in Congress had criticized the nation's premier public health agency for an uncoordinated response. Gerberding, who as CDC acting deputy director for science had emerged as a polished spokesperson for the agency during the crisis, resolved to revamp an organization seen as moving too slowly to address health threats.

    She's certainly stirred things up—including some vocal opposition. In 2003 after bringing in management consultants, Gerberding began a reorganization called the Futures Initiative, creating new “coordinating centers” to oversee CDC's existing centers and drawing howls of protest within CDC. The unrest, simmering for years, drew new public attention in September when the Atlanta Journal-Constitution ran a lengthy story quoting disgruntled former and current CDC scientists. The article suggested that the turmoil has contributed to the departures of the heads of six of CDC's eight original main centers, such as James Hughes, director of the National Center for Infectious Diseases, as well as other seasoned scientists. The story also revealed that last December, five former CDC directors sent Gerberding a letter expressing “great concern” about “low morale” and “losses of highly qualified and motivated staff.”

    In press reports, an Internet blog (cdcchatter.net), and conversations with Science, CDC staffers have complained that the Futures Initiative has dragged on too long, sapped their time, and added layers of bureaucracy that impede their independence. “There's been a deterioration in our capacity coincident with the deterioration in morale,” says Stephen Cochi, a senior researcher in CDC's National Immunization Program. “Something has gone terribly wrong.”

    Researchers also worry about how Gerberding's still-developing plan to align CDC's budget to match a set of “health protection goals,” such as increasing older adults' life spans, will affect research priorities. To her credit, says David Sencer, CDC director from 1966 to 1977 and one of the letter signers, Gerberding has increased efforts to communicate with staff in recent weeks and has announced plans to appoint two ombudsmen: “I think she's trying,” Sencer says.

    In an hourlong interview last week with Science, Gerberding defended her plan to transform CDC and said the mood of many in the agency is upbeat. She pointed to her efforts to branch into new scientific areas such as climate change, expand CDC's extramural grants program, and begin outside peer review of intramural programs. She praised as “extraordinarily gifted” the leaders she's recruited, such as Kevin Fenton, who runs CDC's National Center for HIV, STD, and TB Prevention, and Lonnie King, who directs a new center on zoonotic, vector-borne, and enteric diseases. “There's a difference between our performance in the scientific arena and people's discomfort with some of the things that are going on,” Gerberding said, while acknowledging that her radical overhaul of CDC's operations is inevitably prompting “anger” and “grieving.”

    Below are excerpts from Gerberding's remarks.

    Q: Whose idea was the reorganization at CDC? Was it your idea?

    When the secretary [Health and Human Services Secretary Tommy Thompson] asked me to take the role of CDC, he said, basically, “CDC really needs to modernize.” His concern was that times had changed and that CDC was going to have to be a bigger player in the world of preparing for some pretty large-scale health threats.

    CDC did not have any goals for the agency. There was really a scientific environment that I find was very strong but not really a broad look at, is the science that we're conducting targeting the health problems of today and tomorrow? What's missing?

    We also have to look at the fact that the kind of science that we do is changing. We need to always have gold standard surveillance and epidemiology and the traditional public health sciences, but now we need science in genomics, we need science in climatology with global climate change, we need new science in informatics, and we need new science in health communications. So we have to grow new science at CDC.

    Q: Are there areas of research at CDC that you're cutting back on to accommodate some of the expansion?

    Not at this time. Our budget is very constrained by very strict budget lines that basically dictate, you spend your money for this.

    Q: So you're 3 years into the reorganization. As you know, CDC staff members are saying, “Yeah, we have to change, but let's get it over with.” How long do you think this reorganization should take? Has it taken longer than you expected?

    Absolutely not. Anybody who's gone through a major organizational transformation knows that you measure the timeline in years.

    Q: We hear from people at CDC saying, “I'm still not sure how my job is going to change.” When will they be able to say, “Okay, now I know how my job has changed, and it's not going to really overhaul much more?”

    Many of the hard pieces are done. As of October 1st, the management priorities of the year are number one: stability. The main structural reorganizations are approved and in place. It's really time to say, “Let's take a breath, and let's really think about now how do we make the promise that we had when we started this really come true.”

    Q: Are you concerned that so many senior scientists have left CDC in the last couple of years?

    First of all, it's not an unprecedented rate of departures. We have been tracking the attrition rate of scientists, and there's absolutely no change in the trend whatsoever.

    Q: What about if you just looked at the number of center directors who left from 1996 to 2001, versus from 2001 to 2006?

    I haven't looked over time at the historical attrition of center directors per se. Some of our center directors were no longer in what I would consider to be the most productive phase of their career, and that was something that, you know, is difficult to point out in a public environment. Being a center director is not a life sentence. But we also have some excellent center directors who were recruited to terrific jobs elsewhere, and we were sorry to see them go, believe me. CDC is a good place to recruit from.

    We've had a wonderful influx of new, brilliant people who are leading our centers.

    Q: In the letter from the former CDC directors, they express concern about how many senior people are leaving and about morale. It seems fairly unusual that five directors would send a letter like that. What do you make of this?

    I think [the former directors] weren't conducting a poll of CDC. They were talking to people they respected and they trusted, and they took it very seriously, as I hope I would if I were in their shoes. You know, Dr. [William] Foege [CDC director from 1977 to 1983, a letter signer] was the last person to try to initiate any kind of organizational change at CDC. And when he was going through it, the entire laboratory division of the agency threatened to resign.

    We recognize that a change process for a center as large and as successful as CDC is a very difficult undertaking. When you ask people to be more collaborative, or you're asking people to more formally work together for a common goal, it's a new way of working, and not everyone's comfortable with it.

    Q: What we've heard is that while that [working together] may be a stated goal, it's not really happening. People feel that because there's additional bureaucracy, it's actually harder to work together.

    I think you probably need to talk to more people.

    Q: You've said the news reports reflect symptoms of a “disease” at CDC. What do you mean by this?

    There are a small number at CDC who are intent on continuing to be critical and are not really willing to say, “How can we help?” or “How can we step up to the plate?” In my opinion, the solution to solving organizational problems is to speak up, not necessarily out.

    We're trying to do more to make it safe for people to speak up at every level of the organization, because if we know we've got problems, we can fix them. We're going to try our own blog and really create a system where people can bring their own questions to me anonymously or otherwise, so that we have an informal way of saying, “Gee, how come I can't hire?” or “What is this about performance awards? Let's get the story straight.”

    CREDIT: JAMES GATHANY/CDC

    Q: Another complaint is that scientists felt, especially early on, that they spent a lot of time on these work groups, and yet in the end, it seems like their advice was ignored.

    I completely disagree with that. The people who designed the organizational structure at CDC were scientists [who] came up with three organizational designs; there was lots of conversation about it, and ultimately, you have to pick one. There's no perfect organizational structure in any agency, but we took their advice.

    There's a difference between our performance in the scientific arena and people's discomfort with some of the things that are going on. We are performing with excellence, and I cannot find any evidence of any faltering of CDC's performance in the last 3 years. We are the most credible governmental organization if you believe the Harris opinion poll—and we continue to strive to improve even more.

    Q: When you look back at your tenure as director so far, are there any mistakes you feel you've made, things you'd do differently going forward?

    I'm kind of a speed-oriented person, and this [the transformation] has taken longer than I wish. But I'm counseled by wise people who have done this kind of thing many times that it always takes years. And I wish that we had been clearer about that expectation at the beginning—that people had been more prepared for the fact that organizational transformation takes a long time, and it's really hard.

    Q: How long are you planning to stay [at CDC]? Until the transformation is complete?

    I have absolutely no plan to leave right now. [And] I don't think that the transformation will be complete for many years. People who are scientists of organizational design say you check at the 10-year point about your success or failure of the enterprise.

  12. PARTICLE PHYSICS

    Tidy Triangle Dashes Hopes for Exotic Undiscovered Particles

    1. Adrian Cho

    Physicists have proved that their explanation of matter-antimatter asymmetry is essentially the whole story—even though many hoped the theory wouldn't add up

    Most of us would rather be right than wrong, but not so particle physicists. After numerous painstaking measurements—including key observations made earlier this year—they've concluded that their explanation of the subtle differences between matter and antimatter is essentially correct. That marks a major victory for the prevailing theory of particles, the so-called standard model. But it also disappoints researchers who had hoped to find something new to puzzle over.

    “On one hand, this is a great triumph; all the pieces of the puzzle do fit together,” says David MacFarlane, a physicist at the Stanford Linear Accelerator Center (SLAC) in Menlo Park, California, and spokesperson for the lab's BaBar experiment. On the other hand, “I would say at least half of us had hoped things would not agree with the predictions of theory.”

    To test the standard model's explanation of matter-antimatter asymmetry, or “CP violation,” physicists performed a dizzying exercise in abstraction. According to theory, the differences can be inscribed in a geometrical construct known as the unitarity triangle. Creating a mosaic of measurements, two teams—the French CKM Fitter group and the Italian UT Fit group—independently confirmed that, to within a small uncertainty, the triangle is in fact a triangle, as they reported this summer.

    That shows that the standard model's explanation of CP violation is more or less the whole story, says Stéphane T'Jampens, a CKM Fitter group member at the Annecy-le-Vieux Laboratory for Particle Physics in France. “There is still room for new physics, but not for a dramatic effect,” he says.

    The fact that the triangle closes means, literally, that the standard model adds up. According to the model, the protons and neutrons in atomic nuclei consist of smaller particles called up quarks and down quarks. These fundamental particles have two sets of short-lived cousins: the heavier charm and strange quarks, and the even-more-massive top and bottom quarks.

    One type of quark can transform into another through the “weak interaction,” which is how heavier quarks decay into lighter ones. For example, a bottom quark can turn into an up, a charm, or a top, but not into a down or a strange. The standard model catalogs the transformation rates in a grid of numbers called the CKM matrix. If quarks and antiquarks were mirror images, these would be ordinary numbers, particular sums of which would equal 100%. After all, a charm quark must decay into something.
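    To make that bookkeeping explicit (standard notation; the article itself does not spell it out), the CKM matrix has one entry per pairing of an up-type and a down-type quark, and the squared magnitudes in each row must total 1:

      $$ V_{\rm CKM} = \begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ V_{td} & V_{ts} & V_{tb} \end{pmatrix}, \qquad |V_{cd}|^2 + |V_{cs}|^2 + |V_{cb}|^2 = 1. $$

    The second relation is precisely the statement that a charm quark must decay into something: a down, a strange, or a bottom quark, with probabilities summing to 100%.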

    However, as physicists discovered in 1964, matter and antimatter are slightly out of kilter. Theorists can account for this if elements of the CKM matrix are complex numbers: numbers that have an ordinary “real” part and an additional “imaginary” part that is multiplied by the square root of negative one. With ordinary numbers in the matrix, the mathematics of the standard model remains exactly the same if particles are swapped for antiparticles and vice versa. Put in complex numbers, and that's no longer true, which means matter and antimatter are no longer symmetric. Some combinations of the complex numbers still add up to 100%. Others add up to zero, and these trace triangles in the plane in which real numbers run along one axis and imaginary numbers run up the other.
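    The particular zero sum behind the triangle probed with B mesons is, in the same standard notation, the one linking the first and third columns of the matrix:

      $$ V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0. $$

    Each term is a complex number, an arrow in the plane; laid head to tail, the three arrows must return to their starting point, closing a triangle. A contribution from undiscovered particles would show up as a gap where the third side fails to meet the first.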

    In the early 1990s, physicists realized that they might probe the largest triangle by studying particles called B mesons, each of which contains a bottom quark bound to an antiup quark or an antidown quark. (Quarks are never found alone.) In 1999, they completed two “B-factories”: the PEP-II collider at SLAC, which feeds the BaBar particle detector, and the KEK-B collider at the Japanese particle physics laboratory KEK in Tsukuba, which feeds the Belle detector.

    To determine the angle known as β to high precision, researchers at Belle and BaBar compare the rates at which B0 (pronounced B-zero) mesons and the antimatter versions of B0s decay into certain lighter particles. The result defines a narrow blue triangular swath in the plane (see diagram). The teams measure the angles α and γ by studying other B-meson decay modes, which define wider gray swaths.
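    In the conventional labeling (a textbook gloss, not given in the article), each angle is the complex phase of a ratio of two of the triangle's sides:

      $$ \alpha = \arg\!\left(-\frac{V_{td}V_{tb}^{*}}{V_{ud}V_{ub}^{*}}\right), \qquad \beta = \arg\!\left(-\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right), \qquad \gamma = \arg\!\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right). $$

    Because the angles and the side lengths are measured independently, the triangle is overconstrained, which is exactly what lets the fitting groups test whether it truly closes.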

    Point taken.

    Combined measurements show the two shorter sides of the triangle must both end within the yellow oval.

    CREDIT: CKM FITTER GROUP

    Earlier this year, researchers working with the Tevatron collider at Fermi National Accelerator Laboratory in Batavia, Illinois, helped nail down the length of the right side of the triangle by measuring how fast a Bs (pronounced B-sub-s) meson—which contains a bottom quark and an antistrange quark—transforms into the antimatter version of a Bs in a process known as mixing. That measurement shrank a wide yellow doughnut in the plane to a much skinnier orange one.

    These and other swaths and doughnuts overlap in a small region that shows the triangle must come very close to closing. Had the triangle refused to close, that would have suggested that heavy new particles lurk on the high-energy horizon. By quickly popping into and out of existence within a B meson, those particles might have altered the interactions and deformed the triangle.

    The triangle may even suggest that new particles will be harder to find at the next great accelerator, the Large Hadron Collider (LHC) currently under construction near Geneva, Switzerland. “There is a suspicion that the fact we're not seeing anything new [in the triangle] suggests that there can't be too many light particles within the reach of the LHC,” says Thomas Browder, a physicist at the University of Hawaii, Manoa, and co-spokesperson for the Belle collaboration.

    Others are more optimistic about the LHC. And Fabrizio Parodi, a UT Fit member from the University of Genoa in Italy, says that measurements at the B-factories might still show that the triangle is slightly trapezoidal. For now, however, the standard model appears to be on the money. Alas, sometimes a triangle is just a triangle.
