News this Week

Science  13 Apr 2007:
Vol. 316, Issue 5822, pp. 182

    U.S. Patent Office Casts Doubt on Wisconsin Stem Cell Patents

    1. Constance Holden

    Opponents of the stem cell patents held by the Wisconsin Alumni Research Foundation (WARF) were delighted last week when the government issued a preliminary ruling rejecting the patents. Critics have long argued that they are far too broad, covering technology that was already in use to derive mouse stem cells and laying claims on all primate embryonic stem (ES) cells in the United States regardless of where they may have been derived. At the same time, patent experts caution that it could take years before the matter is resolved.

    The 2 April ruling by the U.S. Patent and Trademark Office (PTO) came in response to a “request” from two public interest groups for a reexamination of three WARF patents awarded in 1998, 2001, and 2006 (Science, 21 July 2006, p. 281). The patents assert rights over not only the methodology for cultivating primate ES cells but also, controversially, the cells themselves (ScienceNOW, 3 April). Those claims affect the activities of anyone in the United States using human ES cells for either research or commercial purposes. The ruling throws into question patents that reportedly have earned WARF $3.5 million in licensing fees over the past 5 years.

    But critics face long odds in battling WARF. Grady Frenchick, a patent lawyer in Madison, Wisconsin, says the PTO initially rejects patents in 90% of reexamination requests, but only 12% of questioned patents are ultimately thrown out. The rest are affirmed in toto or with some modifications. Nonetheless, patent lawyer Cathryn Campbell of San Diego, California, says the WARF decision is more thorough and detailed “than might usually be expected.” She also says each of the three patent rejections was signed by a different examiner, suggesting that the conclusions are widely shared in the PTO.

    “We're not deluding ourselves that this isn't a tough fight,” says WARF Managing Director Carl Gulbrandsen. If the PTO rules against WARF, WARF will go to the PTO's Board of Patent Appeals and Interferences. If that doesn't work, he says, it's on to the Federal Circuit Court of Appeals in Washington, D.C.

    But preliminary as it is, many people are gratified by the patent decision. “Nobody wanted to do anything, but everybody seemed very, very glad that we did,” says John M. Simpson of the Foundation for Taxpayer and Consumer Rights in Santa Monica, California, which brought the request last July.

    Most scientists doing basic stem cell research in academic or government labs are minimally restricted by WARF's current policies, which require them to pay only $500 for a batch of Wisconsin cells. But they object to the red tape involved. “Every possible collaboration … is slowed considerably by having to negotiate the WARF Material Transfer Agreements,” says George Daley of Harvard Medical School in Boston, Massachusetts. He says if the same rules were applied to mouse cells, “our research would grind to a halt.” Martin Pera of the University of Southern California in Los Angeles says that WARF's grip on “basic platform technology critical to the future development” of the field is bound to impede progress.

    Road rage.

    The WARF patents have taken a toll on stem cell researchers.


    For researchers working on commercial applications, WARF makes life much more difficult. WARF spokesperson Andrew Cohn says it's “too complicated” to explain its rates, but Jonathan Auerbach of GlobalStem Inc. in Rockville, Maryland, says he's heard of research licenses costing up to $400,000. Auerbach says many companies have also been put off by WARF's “reach-through” provisions, which call for royalties on any product developed using the cells. He says his company doesn't have to deal with WARF because it has chosen instead to use human embryonal carcinoma cells and human ES cells with abnormal karyotypes that wouldn't be covered by the patents. Mahendra Rao of Invitrogen in Carlsbad, California, says that his company—which is currently negotiating for a WARF license—and others have established outposts outside the United States, where the patents do not apply.

    Robert Lanza of Advanced Cell Technology in Worcester, Massachusetts, says his company's WARF license entails “a six-figure fee, plus an annual maintenance charge.” On top of that, “whenever a researcher asks us for some ES cells—even ES lines we derived ourselves—we are obligated to pay WARF $5000.” Product development is also hobbled. Geron Corp. in Menlo Park, California, has an exclusive license from WARF to develop treatments based on specialized cells grown from the Wisconsin lines, Lanza says, so “we would be sued if we even tried to develop insulin-producing cells to treat diabetes.”

    Some users are hoping that the widespread complaints could lead WARF to soften its policies further even as the patent reexamination grinds on. In January, for example, WARF lifted the requirement that companies must obtain a license to sponsor human ES cell research at universities. It also eased cell-transfer provisions, lifting fees for transfer of cells among academic and nonprofit researchers. Some critics say both decisions were influenced by the pending patent request. WARF denies this, explaining instead that the decisions are part of “evolving policies … always in favor of increasing access.”


    A Mission to Educate the Elite

    1. Richard Stone
    Shock brigade.

    Some 800 North Korean soldiers helped erect Pyongyang University.


    SEOUL—In a dramatic new sign that North Korea is emerging from isolation, the country's first international university has announced plans to open its doors in Pyongyang this fall.

    Pyongyang University of Science and Technology (PUST) will train select North Korean graduate students in a handful of hard-science disciplines, including computer science and engineering. In addition, founders said last week, the campus will anchor a Silicon Valley-like “industrial cluster” intended to generate jobs and revenue.

    One of PUST's central missions is to train future North Korean elite. Another is evangelism. “While the skills to be taught are technical in nature, the spirit underlying this historic venture is unabashedly Christian,” its founding president, Chin Kyung Kim, notes on the university's Web site.

    The nascent university is getting a warm reception from scientists involved in efforts to engage the Hermit Kingdom. “PUST is a great experiment for North-South relations,” says Dae-Hyun Chung, a physicist who retired from Lawrence Livermore National Laboratory and now works with Roots of Peace, a California nonprofit that aims to remove landmines from Korea's demilitarized zone. To Chung, a Christian university is fitting: A century ago, Christianity was so vibrant in northern Korea, he says, that missionaries called Pyongyang “the Jerusalem of the East.”

    The idea for PUST came in a surprise overture from North Korea in 2000, a few months after a landmark North-South summit. A decade earlier, Kim had established China's first foreign university: Yanbian University of Science and Technology, in Yanji, the capital of an autonomous Korean enclave in China's Jilin Province, just over the border from North Korea. In March 2001, the North Korean government authorized Kim and his backer, the nonprofit Northeast Asia Foundation for Education and Culture (NAFEC), headquartered in Seoul, to establish PUST in southern Pyongyang. It also granted NAFEC the right to appoint Kim as PUST president and hire faculty of any nationality, as well as a contract to use the land for 50 years.

    NAFEC broke ground in June 2002 on a 1-million-square-meter plot that had belonged to the People's Army in Pyongyang's Nak Lak district, on the bank of the Taedong River. Construction began in earnest in April 2004. That summer, workers—a few of the 800 young soldiers on loan to the project—unearthed part of a bell tower belonging to a 19th century church dedicated to Robert Jermain Thomas, a Welsh Protestant missionary killed aboard his ship on the Taedong in 1866.

    NAFEC's fundraising faltered, however, and construction halted in fall 2004. The group intensified its Monday evening prayers and broadened its money hunt, getting critical assistance from a U.S. ally: the former president of Rice University, Malcolm Gillis, a well-connected friend of the elder George Bush and one of three co-chairs of a committee overseeing PUST's establishment. “He made a huge difference,” says Chan-Mo Park, president of Pohang University of Science and Technology (POSTECH), another co-chair. South Korea's unification ministry also quietly handed PUST a $1 million grant—more than it has awarded to any other North-South science cooperation project. This helped the school complete its initial $20 million construction push.

    At the outset, PUST will offer master's and Ph.D. programs in areas including computing, electronics, and agricultural engineering, as well as an MBA program. North Korea's education ministry will propose qualified students, from whom PUST will handpick the inaugural class of 150. It is now seeking 45 faculty members. Gillis and other supporters are continuing to stump for a targeted $150 million endowment to cover PUST operations, which in the first year will cost $4 million. Undergraduate programs will be added later, officials say. PUST, at full strength, aims to have 250 faculty members, 600 grad students, and 2000 undergrads.

    PUST hopes to establish research links and exchanges with North Korea's top institutions and with universities abroad. “It is a very positive sign,” says Stuart Thorson, a political scientist at Syracuse University in New York who leads a computer science collaboration between Syracuse and Kimchaek University of Technology in Pyongyang. “Key to success will be achieving on-the-ground involvement of international faculty in PUST's teaching and research.”

    Some observers remain cautious, suggesting that the North Korean military could use the project to acquire weapons technology or might simply commandeer the campus after completion. A more probable risk is that trouble in the ongoing nuclear talks could cause delays. At the moment, however, signs are auspicious. Park, who plans to teach at PUST after his 4-year POSTECH term ends in August, visited Pyongyang last month as part of a PUST delegation. “The atmosphere was friendly,” he says. “The tension was gone.” The Monday prayer group continues, just in case.


    Study Finds Foreign High-Tech Workers Earn Less

    1. Yudhijit Bhattacharjee

    Many U.S. companies say they hire foreign scientists and engineers because of a shortage of qualified native-born workers. But a new salary study bolsters the claim of some analysts that a strong reason may be to hold down wages.

    The study, by B. Lindsay Lowell and Johanna Avato of Georgetown University in Washington, D.C., shows that science, technology, engineering, and mathematics (STEM) workers holding an H-1B—a temporary visa granted to skilled foreign workers—earn 5% less than natives employed in similar positions with similar skills and experience. It also shows that H-1B visa holders who don't job-hop make 11% less than natives and that those who enter the workforce after graduating from a U.S. university earn 16% less.

    There is one group of foreigners, however, that does not seem handicapped by H-1B visa status: those hired directly from overseas—45% of the total—make 14% more than native workers. The study, presented last month before the Population Association of America, uses data collected by the National Science Foundation as part of a 2003 National Survey of College Graduates.

    These findings could influence pending legislation affecting a program that every year admits 65,000 foreign nationals into the U.S. workforce. Business groups want Congress to greatly increase—or, better still, eliminate—the existing ceiling on H-1B visas, arguing that it hurts U.S. competitiveness. The workers, many from India and China, are in such high demand that this month, the government received applications for more than twice the number of slots available next year—on the very first day the applications could be filed.

    Fair market?

    Overall, H-1B visa holders earn 5% less than native-born U.S. workers holding similar jobs. But the difference varies by category of worker.


    Lowell speculates that foreign workers are paid less because they are often compelled to remain with the same employer to get permanent residency within the 6 years of stay allowed by their visas. Lowell says this “de facto bondage”—the residency process, which can take years, starts anew if they change jobs—has the effect of depressing salaries not just for foreign workers but for natives as well.

    One solution, Lowell says, is to grant permanent residency to foreign workers right off the bat, or at least waive the requirement that applicants be sponsored by their employer. Indeed, several bills would grant automatic permanent residency to foreign students graduating from U.S. institutions with advanced STEM degrees (Science, 14 April 2006, p. 177).

    Opponents of high-tech immigration, however, say that the salary differential between H-1B visa holders and natives argues for ending the H-1B program. “Either these foreign temporary workers are not 'the best and the brightest,' or companies are hiring them to hold down starting wages—or both,” says Jack Martin of the Federation for American Immigration Reform in Washington, D.C.


    NSF to Revisit Cost-Sharing Policies

    1. Jeffrey Mervis

    Cost sharing has long been a requirement for many types of competitive grants at the National Science Foundation (NSF). In 2001, for example, institutions pledged more than half a billion dollars to supplement some 3300 NSF-funded projects on their campuses. But despite its value in leveraging federal dollars, cost sharing can also give wealthier institutions an unfair advantage in vying for an award. So in October 2004, NSF decided to eliminate the provision from future program announcements.

    Now NSF's oversight body, the National Science Board, wants to take another look at the issue. Some board members worry that local and state governments, industry, and other nonfederal research partners may lose interest in research collaborations if they don't have a financial stake in the project. “The original idea was to bring in more money, but I think cost sharing is really more about building partnerships,” says Kelvin Droegemeier, a meteorology professor at the University of Oklahoma in Norman who volunteered to lead the board's reexamination. “The institutional buy-in is an important element, and I wonder if the board went too far [in 2004] when we eliminated it.”

    The decision to reopen a long-running debate disturbs some university administrators, who note that federal funding already falls far short of paying for the full cost of academic research. “We had been urging NSF to end [cost sharing] for many years because of our concern about how it was being used in the evaluation process,” explains Anthony DeCrappeo of the Council on Government Relations, a Washington, D.C.-based association of research universities.

    DeCrappeo says grant applicants often suspected a subtle bias from reviewers and program managers in favor of proposals with large institutional commitments. Schools were confused about which programs required cost sharing, he adds. Finally, institutions at times came up with their share by diverting money from existing research activities. Universities spent $8 billion a year on academic research in 2005—more than either companies or state governments, he notes, only some of which represents federal reimbursement for the cost of supporting research on campus—“and there's no reason to have additional matching requirements.”

    Droegemeier says that the board hopes to collect data on the impact of cost sharing across different NSF programs. He and others are especially concerned about no longer requiring state legislatures to support the Experimental Program to Stimulate Competitive Research operating in 27 states and territories that receive relatively small amounts of NSF funding. “We want to get community feedback,” he says. “But something tells me that [eliminating cost sharing] is not the best way to go.”


    Mysterious, Widespread Obesity Gene Found Through Diabetes Study

    1. Jocelyn Kaiser

    The role that obesity plays in diabetes, cancer, and other diseases makes our expanding waistlines one of today's most pressing health problems. Now, on the genetics front, researchers have nabbed a coveted prize: the first clear-cut evidence for a common gene that helps explain why some people get fat and others stay trim. The British team, led by Andrew Hattersley of Peninsula Medical School in Exeter and Mark McCarthy of Oxford University, doesn't know what this gene, called FTO, does. But adults, and even children, with two copies of a particular FTO variant weighed on average 3 kilograms more than people lacking the variant, the researchers report in a paper published online by Science this week.

    Although twin studies have suggested that obesity has a genetic component, some earlier reports of common obesity genes, including a paper in Science last year (14 April 2006, p. 279), have proved controversial. But this new work, which involved nearly 39,000 people, is solid, says Francis Collins, director of the U.S. National Human Genome Research Institute in Bethesda, Maryland. “There's no question that this is correct.”

    Flab factor.

    A genetic variant appears to affect some people's body weight.


    The U.K. team first found the gene in type 2 diabetes patients participating in a multi-disease study sponsored by the Wellcome Trust, the U.K. biomedical charity. Timothy Frayling in Hattersley's lab and his co-workers first analyzed the genomes of 1924 diabetic and 2938 nondiabetic individuals, looking for which of nearly 500,000 genetic markers were more common in those with diabetes. Those markers helped them home in on a variant, called a single-nucleotide polymorphism, in the FTO gene. The gene, located on chromosome 16, was a surprise: Whereas other known diabetes genes predominantly control insulin production, FTO proved to be associated with body mass index, or BMI (weight divided by height squared)—suggesting that it might control weight in more than just people with diabetes.
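    The body mass index mentioned above is just weight in kilograms divided by the square of height in meters. As a rough illustration (the specific heights and weights here are hypothetical, not from the study), here is how the reported ~3-kilogram average difference translates into BMI for an adult of typical height:

    ```python
    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    # Hypothetical adults of the same height, differing by the ~3 kg
    # average effect reported for two copies of the FTO variant:
    print(round(bmi(70.0, 1.75), 1))  # 22.9
    print(round(bmi(73.0, 1.75), 1))  # 23.8
    ```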

    To find out, 41 collaborators looked for the FTO mutation in DNA samples from “literally every single study we could,” says Hattersley, including another two diabetes populations, nine cohorts of white European adults, and two studies of European children. In every one, the FTO mutation was associated with BMI. Overall, about 16% of white adults and children carry two copies of this variant. They are 1.67 times more likely than those lacking any copies to be obese, the group reports.

    The researchers don't know what FTO does. But because FTO may lead to a new pathway for controlling weight, “we'll have people racing to understand” the gene's function, says obesity researcher Jeffrey Flier of Harvard Medical School in Boston. Those studies should help unravel the basic biology of obesity.

    The paper also underscores the importance of tracking down common disease genes in as many groups of people as possible. In the past 2 years, researchers have reported finding common disease genes for age-related macular degeneration, diabetes, and—earlier this month—prostate cancer. However, the finding of another obesity gene, INSIG2, published last year in Science, has held up in only five of nine populations, says co-author Michael Christman of Boston University. The case for FTO's involvement is strengthened by the fact that other obesity gene hunters are finding the FTO polymorphism as well. “[There's] very strong evidence that it's a gene that affects body weight,” says human geneticist David Altshuler of the Broad Institute in Cambridge, Massachusetts. “That's very exciting.”


    Japan Picks Up the 'Innovation' Mantra

    1. Dennis Normile
    Radical overhaul.

    Kiyoshi Kurokawa, science adviser to Japan's prime minister, wants more money for education, but only if universities reform.


    TOKYO—Kiyoshi Kurokawa, science adviser to Japan's prime minister since last fall, doesn't mince words when it comes to talking about what's best for Japan's research and development efforts. “First you have to reform the leading universities,” he says.

    Kurokawa, 70, was offered the job when a phone call “came completely out of the blue” from just-elected Prime Minister Shinzo Abe's office late last September. It was the first time a Japanese prime minister had appointed a science adviser. Kurokawa suspects he caught Abe's attention with his outspoken opinions given while serving on the governmental Council for Science and Technology Policy and as president of the Science Council of Japan. The position is not permanent and could disappear if Abe fails to lead the Liberal Democratic Party to success in elections later this year.

    Kurokawa led the drafting of “Innovation 25,” Abe's vision of how science and technology can contribute to Japan's economic growth out to 2025. Kurokawa laughs about “innovation” being in the title of so many recent science policy manifestos. But he firmly believes in the recommendations, which include making energy and the environment drivers for economic growth, radically increasing funding for education, and reforming Japan's universities.

    University reform is a pet topic for Kurokawa, who rose to be a professor of medicine at the University of California, Los Angeles, before returning to Japan where, after a stint at the University of Tokyo, he became dean of the School of Medicine of Tokai University in Hiratsuka, Kanagawa Prefecture. Below are his edited comments from an interview with Science.

    On innovation:

    The innovation Abe is talking about is not just technological innovation, but social innovation and also nurturing innovative people. Japanese society has to become more conducive to innovation and provide opportunities for risk-taking, adventurous people. It's fine to invest in science and technology. That provides the seeds for [economic] value. But in this globalized age, you really have to compete and deliver the seeds of scientific discovery to the marketplace. That requires social encouragement of entrepreneurial activities.

    On shifting government spending from public works to human resources:

    [The Innovation 25 plan] is a sort of vision statement by the government, and each ministry will be asked to follow this road map. The overall annual budget should have certain objectives. But it is very hard to change [priorities], because each ministry has its own [interests] and their budget remains more or less the same from year to year.

    We could shift public spending more toward human resources rather than infrastructure. But because of the political decision-making process, you have to raise public awareness so that any politician [endorsing a shift] will be supported. As science adviser to the prime minister, I'll try to [do that].

    On reforming Japan's universities:

    At the leading universities, you have to choose when taking the entrance exam [which academic department] you are heading toward. Even within a school of engineering, you have to choose, say, electrical engineering. This means that even by grade 10, students' core studies are shifting depending on whether they want to go into the natural sciences or social sciences or arts and humanities. Why does it have to be this way? Let high school students study whatever they are interested in and get universities to allow more flexible choices. Right now in Japanese society, if it so happens that at age 18 you didn't study [and failed to enter university], there's no second chance. Universities should have more flexible entrance policies.

    For another example, [internationalize the universities] by aiming for 30% of undergraduates to be foreigners. Give them scholarships if need be. The impact of their presence would be to change the mindset of Japanese. There has been talk about Japan becoming a very attractive place for foreign researchers to come for graduate study. Let's start at an earlier age.

    Finally, you have to reform the Japanese hierarchical academic system [in which junior researchers work under department chairs]. That destroys the creativity and independence of younger professors. Under this inbred system, you're just nurturing cloned professors.

    On the scientific community's responsibility to the public:

    People have higher expectations for contributions from the science community because their money is spent on research and development. The public is more informed, and they want more return on their investment—and that's natural. …

    The science community should be accountable for their policy recommendations. Whether the science community becomes trusted by the public depends on doing that. So I think transparency and engagement with the public is very important.


    Chemists Mold Metal Objects From Plastic 'Nanoputty'

    1. Robert F. Service
    Rising star.

    Nanometals can be sculpted into any shape.


    Blacksmiths have molded metals for thousands of years by melting them at ultrahigh temperatures. Now, much like potters transforming clay into ceramics, a group of chemists has found a way to assemble tiny metal particles into a substance that can be shaped and fired—at little more than room temperature. The process creates objects composed of either a single metal or alloys of multiple metals, which could make them well-suited for a raft of applications including catalysis and optics.

    The new work, described on page 261, is drawing high praise. “It's a very nice way to mold particles into whatever shape you want,” says Chuan-Jian Zhong, a chemist at Binghamton University in New York, who describes the work as “excellent.”

    Nanoparticles are the focus of intense research because their tiny size lends them unique electrical, chemical, and optical properties. But when researchers try to join them into assemblies, the particles typically create rigid crystals that can't be reshaped. So Bartosz Grzybowski, a chemist at Northwestern University in Evanston, Illinois, set out to give nanoparticle assemblies a little flexibility. That required striking a very delicate balance. If the nanoparticles bond too readily to each other, each particle winds up linked to all its neighbors, resulting in a tightly knotted ball. But if too few connections are made between particles, the assembly doesn't grow.

    Grzybowski and his colleagues started by creating linkers consisting of long hydrocarbon chains sporting thiol groups at each end that readily bind to metal particles. In the middle of the linkers, they placed azobenzene groups that change their conformation when exposed to ultraviolet light—in this case, switching the linkers from oil-friendly hydrophobic compounds to water-friendly hydrophilic ones.

    The researchers dissolved the linkers in a mixture of an organic solvent and soaplike surfactant and added metal nanoparticles coated with organic compounds abbreviated DDA. As the nanoparticles dispersed through the solution, thiol groups on one end of the linkers displaced weakly binding DDA molecules, glomming onto individual nanoparticles. At this stage, each metal particle was coated by DDA molecules and a few linkers, and surfactants surrounded the linkers' free thiol groups so they did not “see” any of the metal nanoparticles floating nearby. When Grzybowski's team flipped on the UV light, the linkers became hydrophilic and migrated toward one another in the hydrophobic organic solvent. The free thiol groups latched onto nanoparticles on neighboring linkers, creating growing webs of particles.

    The Northwestern team didn't want all these webs to unite, however, because that would lead to a messy precipitate. After some trial and error, they found that if they added just the right amount of nanoparticles, a large number of spherical webs would form, but the particle feedstock ran out before they joined up. Together, these “supraspheres” formed a kind of waxy paste the consistency of putty, which could be molded to form essentially any shape from spheres to gears. Moreover, when they fired their shapes at a modest 50°C, the heat drove off the organics and welded the neighboring nanoparticles together, creating a continuous and porous metal network.

    The Northwestern researchers have already shown that their newly fired metals are electrically conductive. Now they are testing their optical and catalytic properties. If those turn out well, moldable nanometals may end up in everything from catalytic membranes for fuel cells to novel chemical sensors.


    Global Warming Is Changing the World

    1. Richard A. Kerr

    An international climate assessment finds for the first time that humans are altering their world and the life in it by altering climate; looking ahead, global warming's impacts will only worsen

    In early February, the United Nations-sponsored Intergovernmental Panel on Climate Change (IPCC) declared in no uncertain terms that the world is warming and that humans are mostly to blame. Last week, another IPCC working group reported for the first time that humans—through the greenhouse gases we spew into the atmosphere and the resulting climate change—are behind many of the physical and biological changes that media accounts have already associated with global warming. Receding glaciers, early-blooming trees, bleached corals, acidifying oceans, killer heat waves, and butterflies retreating up mountainsides are likely all ultimately responses to the atmosphere's growing burden of greenhouse gases. “Climate change is being felt where people live and by many species,” says geoscientist Michael Oppenheimer of Princeton University, a lead author of the report. “Some changes are making life harder to cope with for people and other species.”

    The latest IPCC report sees a bleak future if we humans persist in our ways. The climate impacts, mostly negative, would fall hardest on the poor, developing countries, and flora and fauna—that is, on those least capable of adapting to change. Even the modest climate changes expected in the next few decades will begin to decrease crop productivity at low latitudes, where drying will be concentrated. At the same time, disease and death from heat waves, floods, and drought would increase. Toward midcentury, up to 30% of species would be at increasing risk of extinction.

    Wetter's better?

    Warmer and wetter high latitudes will yield more crops but also more flooding.


    “This stark and succinct assessment of the future … is certainly troubling,” wrote economist and coordinating lead author Gary Yohe of Wesleyan University in Middletown, Connecticut, in an e-mail message from the final meeting of the IPCC working group in Brussels, Belgium. It is now obvious, he says, that even if greenhouse gas emissions are immediately reduced, changes are inevitable. Humans will have to adapt, if we can.

    Toning down the message

    The working group's report had a difficult coming-out party on 6 April. Like the reports from the two other IPCC working groups (WGI—see Science, 9 February, p. 754—and WGIII, due out on 4 May), Working Group II's involved a couple of hundred scientist authors from all six continents analyzing and synthesizing the literature over several years. Reviews by hundreds of experts and governments generated thousands of comments. Twenty chapters totaling 700 printed pages led to a Technical Summary of 80 to 100 pages and a Summary for Policymakers (SPM) of 23 pages. Then came the hard part: the 4-day plenary session in Brussels, which brought together scientists and representatives of 120 governments. There, unanimity among governments is required on every word in the SPM, ostensibly to ensure that the phrasing clearly and faithfully reflects the reviewed science of the chapters.

    This time, there were “bigger bumps than normal,” says climate scientist Stephen Schneider of Stanford University in Palo Alto, California, a coordinating lead author. “It was longer and more painful than usual,” Oppenheimer agrees. Especially as the deadline approached early Friday morning, a few countries—attendees mention coal-rich China and oil-rich Saudi Arabia most often—insisted on substantial changes. Sometimes, the softening of the summary could be taken as a technical adjustment. For example, the SPM draft's “20 to 30% [of] species at increasingly high risk of extinction” as the world warms 1° or 2°C became “Up to 30% of species at increasing risk of extinction.”

    Perhaps the most substantial loss from the draft SPM was in the tables. The plenary session eliminated parts of a table that would have allowed a reader to estimate when in this century the various projected impacts might arrive. Also dropped was an entire table that laid out quantified impacts—such as annual bleaching of the Great Barrier Reef in the relatively near term—in an easily accessible, region-by-region format.

    Toning-down aside, “it's still a decent report,” says Schneider. “There are no key science points that didn't come through in the SPM,” says ecologist Christopher Field of Stanford, a coordinating lead author. And all of the losses from the draft SPM are still available in the Technical Summary and the underlying chapters for the determined reader. However, anyone reading the SPM “should understand that the findings are stated very conservatively,” says Field.


    Impacts, present and future

    Conservative though it may be, the report holds one major first. “For the first time, we concluded anthropogenic warming has had an influence on many physical and biological systems,” says impacts analyst and coordinating lead author Cynthia Rosenzweig of NASA's Goddard Institute for Space Studies in New York City. Media coverage of weird weather and its effects had come to imply that global warming was affecting things both living and inanimate, and individual studies pointed that way too, but no official body had given the link its imprimatur.

    To make it official, IPCC authors considered 29,000 series of observations from 75 studies. Of those series, 89% showed changes—glaciers receding or plants blooming earlier, for example—consistent with a response to warming. Those responses so often fell where greenhouse warming has been greatest that it's “very unlikely” the changes were due to natural variability of climate or of the physical or biological system involved. “It's clear it's not all about future impacts,” says Field. As an example, he cites the decline of more than 20% in snowmelt since 1950 as the U.S. Pacific Northwest has warmed. That puts a squeeze on everything from hydroelectric dams to salmon.
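    The "very unlikely" judgment can be illustrated with a back-of-envelope sign test (a simplification for intuition only; the IPCC's actual attribution analysis also accounts for spatial and statistical dependence among the series):

```python
import math

# Illustrative sign test, not the IPCC's method. If the ~29,000 observed
# series were responding to random variability rather than warming,
# roughly half would be expected to change in the "warming-consistent"
# direction by chance alone.
n = 29_000          # observation series considered by the IPCC authors
p_chance = 0.5      # chance probability of a warming-consistent change
observed = 0.89     # fraction actually consistent with warming

# Standard z-score for a binomial proportion
z = (observed - p_chance) / math.sqrt(p_chance * (1 - p_chance) / n)
print(round(z))  # roughly 133 standard deviations above chance
```

    Even allowing for correlation among the series, a result that far from the chance expectation is why the report can call natural variability a "very unlikely" explanation.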

    Like the ongoing effects of global warming, future impacts will vary greatly from region to region. Perhaps the most striking example is shifting precipitation. WGII authors started with WGI's model-based prediction of increasing dryness at low latitudes (the U.S. Southwest and northern Mexico; the Caribbean region, including northeast Brazil; and all around the Mediterranean) and increasing wetness at high latitudes (northern North America and northern Eurasia). They then drew on published studies of the effects of climate change on crops.

    The results of a meta-analysis of 70 modeling studies “are compelling,” says geographer William Easterling of Pennsylvania State University in State College, a coordinating lead author. “It's become very clear that in high latitudes, a warming of 1° to 3°C is beneficial for the major cereals—wheat, corn, and rice. At the same time, in low latitudes, even a little warming—1°C—results in an almost immediate decrease in yield.” In the north, the added water accompanying warming boosts yields, but toward the equator, the added heat is too much for the plants. But “you can't warm the mid-latitudes forever without getting some negative response,” says Easterling. “After a 3°C warming, you get this consistent downturn in cereal yield” even at higher latitudes. A 3°C warming is possible globally late in the century if nothing is done about emissions.

    Other global warming impacts are even more localized. As glaciers melt in the next few decades in places such as the Andes and Himalayas, flooding and rock avalanches will increase at first. Then, as the glaciers continue to recede toward oblivion, water supplies will decrease. Sea-level rise from melting glaciers and ice sheets would flood low-lying coastal areas, threatening tens of millions of people living on the megadeltas of Africa and Asia, such as the Nile and Brahmaputra. Coral lives near its upper limits of temperature, so even modest warming is projected to lead to more frequent bleaching events and widespread mortality. Extreme heat waves would become more frequent and more deadly for people. Warming and drying would encourage forest pests, diseases, and fire, hitting forests harder as larger areas are burned. The IPCC list goes on and on.

    Some of both.

    Global warming will bring more precipitation (bluish) to high latitudes in both winter (left) and summer (right) and less precipitation (reddish) to low latitudes.

    Projected Patterns of Precipitation Changes


    The report also briefly considers potentially catastrophic climate events. WGI had already found that in this century, the great “conveyor belt” of currents carrying warm water into the chilly far North Atlantic will only slow, not collapse. So Western Europe isn't about to freeze over. In fact, it would warm under the strengthening greenhouse. But WGII still sees likely North Atlantic-wide effects including lower seawater oxygen and changes in fisheries.

    More ominous is the report's discussion of potentially large sea-level rise. The main statement is low-key: “There is medium confidence that at least partial deglaciation of the Greenland ice sheet, and possibly the West Antarctic ice sheet, would occur over a period of time ranging from centuries to millennia for a global average temperature increase of 1-4°C (relative to 1990-2000), causing a contribution to sea level rise of 4-6 m or more.”

    Four to 6 meters of sea-level rise would be globally catastrophic. New Orleans, south Florida, much of Bangladesh, and many major coastal cities would be inundated. Centuries to millennia might seem like plenty of time to deal with this still-uncertain prospect, but the “1-4°C” is a tip-off. Combine that with the table of greenhouse gas-emission scenarios dropped from the SPM, and it is evident that a 1°C warming would in all likelihood arrive by mid-century, assuming no action to cut emissions. A 3°C warming could be here by the end of the century. Although the sluggish ice sheets might not respond completely to that warming for centuries or millennia, before the century is up, the world could be committed to inundation of its low-lying coastal regions.

    The world loses

    So what's the bottom line? WGII did that calculation too. According to the SPM, “Global mean losses could be 1-5% [of] Gross Domestic Product (GDP) for 4°C of warming.” That's a range from significant but bearable to truly burdensome. “There's too much uncertainty in that calculation” to take it too seriously, Yohe says. That's because it is a messy computation involving assumptions about all sorts of factors: how sensitive the climate really is to added greenhouse gases; what people alive today owe to future generations; how to balance the needs of greenhouse gas emitters and climate victims. And the calculation doesn't even include many nonquantifiable impacts, such as ecosystem losses and the conflicts resulting from climate refugees, that could double damage costs. The SPM's bottom line: “The net damage costs of climate change are likely to be significant.”

    Economists are “virtually certain,” however, that whatever the global climate costs prove to be, not everyone will bear them equally. Some people will be exposed to more climate change than others. Some will be more sensitive to it. Some will be less able to adapt to it. And some will suffer on all three counts. These people might live in countries that lie in low latitudes where drying will predominate. Their economies are likely based largely on agriculture that is susceptible to drought. And they are more likely to be developing countries without the wealth needed to adapt to climate change, say, by building irrigation systems.

    Because such happenstances of geography, climate, and economics make some groups particularly vulnerable, Yohe says, “climate change will impede progress toward meeting Millennium Development Goals”—eight U.N.-sponsored goals, which include eradicating extreme poverty and hunger and ensuring environmental sustainability. “If you don't do something about climate, you're swimming upstream” trying to meet these goals across the world. Fortunately, says Yohe, many of the steps that would help communities adapt to climate change would also help meet the U.N. goals.

    Although the report emphasizes the vulnerability of poorer, developing countries, it foresees no real winners. Every population has vulnerable segments, Oppenheimer points out. In the European heat wave of 2003 that killed perhaps 30,000, it was the elderly. When Hurricane Katrina hit New Orleans, Louisiana, killing 700, it was the poor. Adaptation—building levees in the case of New Orleans—has not worked out all that well so far.

    And no one region seems exempt. In a paper published online by Science on 5 April, climate modeler Richard Seager of Lamont-Doherty Earth Observatory in Palisades, New York, and his colleagues look at 19 global climate models run for the IPCC. They expect the dryness of the 1930s Dust Bowl to return to the American Southwest by midcentury, for good. If the models are right, the western drought of the past decade is only the beginning. If the world's biggest emitter of greenhouse gases needed some prodding to take action on global warming, this could be it.


    The Education of T Cells

    1. Dan Ferber

    New research on how T cells learn to home in on their targets could lead to selective treatments that boost or dampen immune responses in specific tissues of the body

    Psst, here's the plan.

    When a dendritic cell (top) embraces a T cell (bottom), it activates it and instructs it where to migrate.


    Almost 3 decades ago, a team of immunologists made an intriguing observation. They collected white blood cells called lymphocytes from lymphatic fluid (lymph) that drained the skin or the gut of a healthy sheep, labeled those lymphocytes, and injected them back into the same sheep's bloodstream. To their surprise, the injected cells didn't patrol the whole body: Cells from the skin region returned mostly to the skin, whereas those from the intestine homed mostly back to the gut.

    T cells, the infection-fighting immune cells born in the thymus, were thought to cruise the entire body via the bloodstream and the lymphatic circulation, stopping where they spotted signs of trouble. So how did those sheep T cells know to navigate to and patrol a particular tissue? The question matters because immunologists hope to battle tumors or autoimmune diseases by controlling the cellular immune response in one organ, while leaving the immune system alone elsewhere.

    The first clues to an answer came from Eugene Butcher and Irving Weissman of the Stanford University School of Medicine in Palo Alto, California. In the 1980s, they showed that certain squads of T cells can distinguish between tiny blood vessels near the skin or near the intestine. Then Butcher's team and others identified dozens of cell-surface receptors and soluble signaling chemicals called chemokines that helped those T cells penetrate and patrol particular tissues. In the 1990s, Butcher and other biologists uncovered a molecular code—the unique combination of receptors and chemokines—that directed T cells to, say, the skin or the gut. But one crucial mystery remained: How does a newborn T cell, fresh from the thymus, become programmed, or educated, to express the combination of receptors that will let it home to a particular tissue? “It's a fundamentally important problem in cellular immunology,” says Jeffrey Frelinger of the University of North Carolina, Chapel Hill.

    Over the past 5 years, researchers have begun to crack that mystery. The most recent work, which shows how immune sentinels called dendritic cells instruct T cells where to go, is revealing a layer of intelligence in the body's immune surveillance mechanisms that had gone undetected, say Frelinger and other immunologists. Ultimately, physicians hope to use the emerging understanding of T cell targeting to develop immune-modulating compounds more specific than today's drugs, which for the most part are blunt instruments that can cause serious side effects. Drugs that direct T cells to specific sites could battle tumors, improve vaccines, or ease autoimmune diseases. “One can conceivably generate drugs that interfere with organ-specific [T cell] recruitment without paralyzing immune defenses everywhere else,” says immunologist Ulrich von Andrian of Harvard Medical School (HMS) in Boston.

    T cells on patrol

    When tissue is infected by a foreign agent, its first line of defense is inflammation, the nonspecific response involving pain, redness, heat, and swelling. Then, over several days, the immune system activates squads of T cell clones, lines of cells each of which can latch onto a single bit of pathogen on an infected cell. T cells then neutralize the threat, call for backup from other immune cells, or both.

    T cell activation begins when dendritic cells, octopuslike cells that roam the body's tissues, spot infection and chew up infected cells to obtain antigen—a small piece of a pathogen or tumor that can trigger an immune response. Dendritic cells then travel through the lymphatic ducts to the nearest lymph node, a spongelike sac that serves as a regional field station for the immune system. There the dendritic cells encounter many so-called naïve T cells but only activate for battle the ones bearing receptors that recognize the antigen they carry. The newly vigilant T cells multiply into an army of clones known as effector T cells that can fight infected or rogue cells.

    The effector T cells then move from the lymph nodes through lymphatic vessels to the bloodstream, where they circulate throughout the entire body. But to fight pathogens, they need to find the site of the infection. Immunologists believe that some effector T cells stop in any tissue or organ where there are signs of trouble, or inflammation. But Butcher and others have long concentrated on the more specialized T cells that can home back from the bloodstream to a particular tissue, such as skin or gut.

    By the early 2000s, Butcher and others had uncovered a clever addressing system that targets those tissue-specific T cells to the correct home. These T cells use a four-step process to exit the bloodstream across the walls of tiny veins called high endothelial venules. Each of the four steps requires either matching pairs of Velcro-like receptors on T cells and the venule walls, or matching pairs of other T cell receptors and chemoattractants, small molecules that make up a tissue's unique chemical scent. If the four correct pairs of receptors and chemoattractants are present in the right combination, the T cell recognizes that it's in the correct tissue, then squeezes through the venule wall to the tissue beyond. Today, Butcher says, the field is starting to ask how a naïve T cell learns to express the correct combination of homing receptors for the gut, skin, or other tissues—a process called T cell education, or imprinting.
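    The addressing system amounts to combinatorial matching: a T cell exits the bloodstream only where it displays the full set of receptor/ligand pairs spelling out that tissue's address. A minimal sketch of the logic, using receptor pairs commonly cited in the homing literature (not named in this article) and collapsing the four sequential steps—rolling, activation, arrest, transmigration—into a single set comparison:

```python
# Illustrative "address code" sketch. Real exit from a high endothelial
# venule is a four-step sequence, each step with its own receptor/ligand
# pair; here the whole address is checked at once for simplicity.
GUT_ADDRESS = {"alpha4beta7/MAdCAM-1", "CCR9/CCL25"}
SKIN_ADDRESS = {"CLA/E-selectin", "CCR10/CCL27"}

def can_exit(t_cell_receptors, tissue_address):
    """A T cell leaves the bloodstream only if it displays every
    receptor/ligand pair in the tissue's address."""
    return tissue_address <= t_cell_receptors

# A gut-imprinted effector T cell carries the gut pairs only:
gut_imprinted = {"alpha4beta7/MAdCAM-1", "CCR9/CCL25"}
print(can_exit(gut_imprinted, GUT_ADDRESS))   # True
print(can_exit(gut_imprinted, SKIN_ADDRESS))  # False
```

    The combinatorial requirement is what makes the code tissue-specific: sharing one pair with another tissue's address is not enough to exit there.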

    Before immunologists could find out how T cells undergo such imprinting, they had to make sure it really happened in living animals and that the cells were not born “precommitted to homing to gut or skin or joints,” Butcher says. Butcher and Daniel Campbell, now at the University of Washington, Seattle, did that in 2002. They injected mice with millions of identical, fluorescently labeled mouse T cells, all of which had been genetically engineered to recognize an egg-white protein. They immunized the mice with that egg-white protein, then 2 days later, surgically removed lymph nodes and other lymphoid tissue from the gut and the skin. Inside all the lymphoid tissue they examined, the quiescent T cells were being activated into effector T cells that were ready to battle the foreign protein. But T cells found in the gut lymph nodes produced receptors that would help them find their way to the gut itself once they had reentered the bloodstream from the nodes, whereas otherwise identical T cells from the skin lymph nodes produced receptors that would direct them to skin, the researchers reported in the Journal of Experimental Medicine. “Where you get stimulated determines which homing receptors are expressed,” Butcher explains.

    What happens within a tissue's lymph node to program a T cell to migrate from the bloodstream to that tissue? Von Andrian suspected that dendritic cells teach T cells to home to the tissue where the antigen they carry was found. That's because dendritic cells are on the scene in lymph nodes, embracing and helping activate the T cells.

    Von Andrian's team purified dendritic cells from lymphoid tissue (lymph nodes or other specialized immune tissue) from three parts of the body: spleen (a central lymphoid organ), skin, and intestine. They incubated each tissue-specific type of dendritic cell in separate petri dishes with naïve T cells. After 5 days, the T cells were ready to do battle with pathogens. But in a test-tube experiment, only T cells exposed to dendritic cells from the Peyer's patch, lymphoid tissue in the intestinal wall, migrated toward a gut chemokine.

    Then, to see whether the same thing happened in living animals, the researchers injected mice with fluorescent T cells that had been stimulated by one of the three types of dendritic cells. T cells ended up mostly in the gut when they'd been activated by dendritic cells from gut lymphoid tissue, but not when they'd been activated by dendritic cells from skin lymph nodes, the researchers reported in 2003 in Nature. The same year, immunologist William Agace's team at Lund University in Sweden reported that dendritic cells from mesenteric lymph nodes, another immune site in the gut, also educate T cells they touch to home in on the intestines. Together, the results mean that “antigen-presenting cells from different lymphoid tissues are not equal in terms of the story they're telling,” von Andrian says.

    Since then, immunologists have worked out some of the chapters of that story. In a pivotal 2004 paper in Immunity, Makoto Iwata of the Mitsubishi Kagaku Institute of Life Sciences in Tokyo and his colleagues discovered that vitamin A (retinol), which is abundant in the intestine but scarce in other tissues, plays a key instructional role in T cell homing. In test-tube experiments, they found that dendritic cells from the intestinal lymph nodes convert retinol to retinoic acid, which induces T cells to make gut-homing receptors but not skin-homing receptors. Subsequent animal experiments confirmed the importance of this conversion to T cell homing: Mice starved for vitamin A had far fewer intestinal T cells than mice that consumed enough of the vitamin.

    Recently, Butcher and research scientists Hekla Sigmundsdottir and Junliang Pan and their colleagues probed for a comparable molecular mechanism in the skin. “We wondered if a similar vitamin or metabolite that might be restricted to the skin might imprint skin homing,” Butcher says. Vitamin D, which is mass-produced by skin cells in response to sunlight, “was the obvious candidate,” he adds.

    Butcher's team isolated lymphatic fluid from the skin of sheep, purified dendritic cells from that fluid, and found that the immune cells convert vitamin D3, the sun-induced variant of vitamin D, into its active form. In other test-tube experiments, this activated vitamin D3 induced T cells to make a receptor that helps them follow their nose to a chemoattractant in the epidermis, the skin's outer layer, the team reported in the February issue of Nature Immunology. An evolutionarily related chemoattractant in the gut lures T cells using a different receptor to that tissue, Butcher points out. These studies indicate that dendritic cells can exploit a tissue's unique biochemical fingerprint—its unique mix of metabolites—to educate T cells to patrol that tissue, Butcher says.

    T cells specialized for one tissue can also be retrained to patrol another area, von Andrian, HMS immunologist Rodrigo Mora, and their colleagues reported in 2005 in the Journal of Experimental Medicine. They cocultured T cells for 5 days with dendritic cells from the gut, spleen, or skin, which imprinted T cells for those tissues. They then washed each group of T cells and cultured them with dendritic cells from a different tissue. After 5 more days with their new instructors, “the T cell phenotype would always match the flavor of the dendritic cells they had seen last,” von Andrian says. That ability to reassign T cells to new tissues may give the immune system an important degree of flexibility in fighting infection. If the pathogen stays put, the immune response is concentrated in that tissue, von Andrian says. “But if the pathogen spreads, you have not put all your eggs in one basket.”

    Immunologists have begun investigating whether the T cell's instructors—the dendritic cells—themselves specialize to function in a particular tissue, or whether they simply sense their environment and respond. A definitive answer is not yet in. Butcher's team found data suggesting that dendritic cells have two vitamin D-activating enzymes no matter what tissue they're from, but only in the skin do they have access to the sunlight-produced vitamin. Agace's team, in contrast, has found evidence that at least some dendritic cells are more specialized. In a 2005 study in the Journal of Experimental Medicine, his Swedish team reported evidence of two types of gut dendritic cells: one that has visited the intestinal wall and can train T cells to migrate to the gut, and another, of unknown origins, that can't.

    Steering cells right

    The new work on tissue homing is raising immunologists' hopes of specifically boosting or suppressing immunity in selected tissues. Most autoimmune diseases involve an overactive, self-destructive immune response toward a particular tissue: the pancreas in type 1 diabetes, the central nervous system in multiple sclerosis (MS), the joints in rheumatoid arthritis. Typically, treatments for such diseases dampen the entire immune system and increase the risk of infection. Similarly, stimulating the immune system nonspecifically to fight a tissue-specific tumor can increase the risk for autoimmune side effects.

    That's where the new knowledge of T cell homing can help, Butcher says. Drugs that alter homing are not themselves new; in 1997, Butcher and HMS biochemist Timothy Springer co-founded a biotech company called LeukoSite, which was later bought by Millennium Pharmaceuticals, to develop drugs that block the Velcro-like interactions and molecular sniffing that help T cells find their way into tissues. Many drug and biotech companies are still pursuing that approach, which has produced a U.S. Food and Drug Administration-approved drug for MS and drugs for ulcerative colitis and Crohn's disease that are currently in clinical trials. But blocking a single receptor often fails to prevent T cell entry into tissues because the receptors involved in homing can often fill in for one another.

    Drugs that alter T cell imprinting “might be a way around the problem of redundancy,” Butcher says. Both gut-homing and skin-homing T cells interpret their respective signals, retinoic acid and activated vitamin D, using members of a large family of receptors that sense hormones and metabolites and directly control gene expression. Drugs that stimulate or alter these nuclear-hormone receptors already exist, and some are being tested for autoimmune diseases such as rheumatoid arthritis or psoriasis. That gives researchers a head start, as those drugs might alter the instructions that tell T cells where to migrate, explains Butcher. “The exciting thing about imprinting is that we're just learning about its potential,” he says.

    Back to the front.

    Dendritic cells use a tissue's characteristic metabolite—dietary vitamin A in the gut or sunlight-induced vitamin D in the skin—to educate T cells to follow their nose back to that tissue.


    The recent advances in T cell imprinting also create several possible new ways to fight disease, Agace says. Most pathogens enter the body through the surface, or mucosa, of a particular tissue, which means that a drug that directs T cells to the mucosa could enhance the cellular immune response, making vaccines more effective in warding off intruders. Other compounds could help battle localized tumors. For example, coinjecting lab-grown dendritic cells, which are already used as an antitumor therapy, with compounds modeled on retinoic acid could potentially program T cells to migrate to a gut tumor and boost the treatment's effectiveness, Agace says.

    Retraining T cells could backfire by working too well, caution some immunologists. In a recent clinical trial, the MS drug Tysabri stopped abnormal T cell homing to the brain and eased MS symptoms. But it also suppressed the brain's immune surveillance system so much that a normally benign virus began reproducing in three patients, ultimately killing them.

    What's more, T cells may not take instruction in all tissues, says pulmonary physician Jeffrey Curtis of the University of Michigan, Ann Arbor. Immunologists still debate whether specific squads of T cells are assigned to patrol tissues other than the skin and gut. Researchers have been unable to find a combination of adhesion molecules or chemoattractants that lures specific T cells into lungs, he notes. But physiologist Klaus Ley of the University of Virginia, Charlottesville, who studies T cell migration in lung and blood vessel disease, disagrees: “If I project into the future, we will see more homing specificity—for gut and lung and I hope for [atherosclerotic] blood vessels.”

    The research on T cell homing has also now begun to merge with another hot topic in immunology: regulatory T cells, a much-touted cell type that naturally suppresses autoimmune reactions. Several years ago, Alf Hamann of Charité University of Medicine in Berlin and his colleagues reported that regulatory T cells isolated from different tissues have homing receptors like those that effector T cells sport. Now, in a March online paper in the European Journal of Immunology, they report that these cells, like effector T cells, can be programmed by dendritic cells, an interleukin, and retinoic acid to home to skin or gut. In theory, sub-populations of regulatory cells could therefore be prepared to target a tissue and suppress an autoimmune response. “If you could make a regulatory T cell in vitro and make it go where you want it to go, that's a cool thing,” Butcher says.


    Surveys of Exploding Stars Show One Size Does Not Fit All

    1. Tom Siegfried*
    1. Tom Siegfried is a writer in Los Angeles, California.

    Type Ia supernovae are regular enough that astronomers can use them to measure the universe. But some of the “standard candles” are breaking the theoretical mold


    Computer models show ways stars might explode but not what primes them for the blast.


    SANTA BARBARA, CALIFORNIA—When astronomers wish upon a star, they wish they knew more about how stars explode. In particular, experts on the stellar explosions known as supernovae wonder whether textbook accounts tell the true story—especially for a popular probe of the universe's history, the supernovae designated as type Ia.

    In fact, new observational surveys suggest that cosmic evidence based on type Ia supernovae rests on a less-than-secure theoretical foundation. “We put the theory in the textbooks because it sounds right. But we don't really know it's right, and I think people are beginning to worry,” says Robert Kirshner, a supernova researcher at the Harvard-Smithsonian Center for Astrophysics (CfA) in Cambridge, Massachusetts. “We keep saying the same thing, but the evidence for it doesn't get better, and that's a bad sign.”

    Kirshner was among more than 100 experts on stars and their explosions who gathered to discuss their worries last month at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara.* General agreement emerged that the textbook story “is a little bit of 'the emperor has no clothes,'” as Lars Bildsten, an astrophysicist at the Kavli Institute, put it. “There's a lot of holes in the story.”

    Understanding type Ia supernovae has become an urgent issue in cosmology, as they provide the most compelling evidence that the universe is expanding at an accelerating rate. That acceleration, most cosmologists conclude, implies the existence of a cosmic fluid called “dark energy” that exerts a repulsive force countering gravity.

    In the textbook story, type Ia explosions occur in binary systems where a worn-out star known as a white dwarf siphons matter from a nearby companion. When the planet-sized dwarf accumulates enough mass to exceed the Chandrasekhar limit—about 1.4 times the mass of the sun—its density becomes great enough to ignite thermonuclear fusion, blowing itself to smithereens.

    Because all white dwarfs presumably explode upon reaching the same mass, they should all be equally bright at any given distance, and so their apparent brightness should diminish with distance in a predictable way. Faraway type Ia supernovae are dimmer than expected, however, suggesting that the universe's expansion rate has been speeding up.
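    The standard-candle logic rests on the inverse-square law: if every type Ia has the same luminosity, apparent brightness falls off predictably with distance, so an anomalously dim supernova must lie farther away than its redshift would otherwise suggest. A minimal sketch of that inference (idealized; real surveys also correct for redshift, dust, and light-curve shape):

```python
import math

L = 1.0  # luminosity of a "standard" type Ia (arbitrary units)

def apparent_flux(distance):
    """Inverse-square law: observed flux falls off as 1/d^2."""
    return L / (4 * math.pi * distance**2)

def inferred_distance(flux):
    """Invert the relation: a dimmer supernova must be farther away."""
    return math.sqrt(L / (4 * math.pi * flux))

f = apparent_flux(10.0)
# A supernova 4x dimmer than expected at distance 10 is inferred
# to lie twice as far away.
print(inferred_distance(f / 4))  # 20.0
```

    It is this extra inferred distance, measured across many supernovae at different redshifts, that signals an accelerating expansion—which is why systematic brightness differences among type Ia's matter so much.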

    But figuring out exactly what dark energy is will require a precise gauge of its effect on the expansion history of the universe. And type Ia supernovae are not yet well enough understood for analysis of their brightness to provide the needed precision, experts say. “We do not know the details,” says Alex Filippenko of the University of California, Berkeley. “There is still a lot of controversy about what exactly is going on in a Ia.”

    Several speakers during the Santa Barbara conference noted problems with the textbook view. For one, astronomers have long realized that not all type Ia's explode with the same brightness. Instead, the brightest are several times as luminous as the dimmest. Type Ia explosions in old, elliptical galaxies appear dimmer, on average, than explosions in younger galaxies. It may be that such differences reflect different pathways leading to explosion, hinting that type Ia supernovae come in two distinct flavors. “There is now very strong evidence that … there are very likely two populations of type Ia supernovae,” said Bildsten.

    Corrections for brightness differences can be made based on the color of the explosion's light and how rapidly it dims. Such fixes were good enough to establish accelerating expansion but not for pinning down dark energy's properties precisely. That will require answers to several nagging questions, including the nature of the white dwarf's companion and the mechanism of the explosion.

    The good news from the conference is that several computer simulations seem to show that a 1.4-solar-mass white dwarf can indeed explode like a bomb, although various models differ in their details. In some models, a wave of fusion burns slowly through the star (a process known as deflagration), ultimately detonating the fast-burning explosion that mimics a hydrogen bomb. In the star, however, the elements fused are carbon and oxygen, the elements believed to make up the bulk of the white dwarf type Ia progenitors.

    Immediate detonation of the entire star in a rapid shock-wave blast is unlikely because it would convert nearly all the material into an isotope of nickel (which eventually decays to form iron). Because intermediate-weight elements (such as silicon) are found in type Ia debris, some of the burning must be slower.
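The decay chain mentioned above can be written out explicitly; the half-lives are standard textbook values, not given in the article:

```latex
{}^{56}\mathrm{Ni}
  \;\xrightarrow{\;t_{1/2}\,\approx\,6\ \mathrm{days}\;}\;
{}^{56}\mathrm{Co}
  \;\xrightarrow{\;t_{1/2}\,\approx\,77\ \mathrm{days}\;}\;
{}^{56}\mathrm{Fe}
```

It is the radioactive decay of this freshly synthesized nickel and cobalt that powers a type Ia supernova's slowly fading glow.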

    A deflagration model discussed at the conference by Wolfgang Hillebrandt of the Max Planck Institute for Astrophysics in Garching, Germany, seems able to produce an explosion, but only if deflagration begins at multiple points within the star. Another approach, presented by Don Lamb of the University of Chicago in Illinois, showed how a bubble of fusion beginning inside the star can burst out through its surface and then, confined by the star's gravity, wrap around the star in all directions until it encounters itself on the other side (see first figure). When the fusing material collides with itself, a jet of material fires back down into the star and detonates the full fusion explosion. A new three-dimensional computer simulation demonstrates this sequence, confirming the basic picture seen in earlier two-dimensional models.

    What next?

    Uncertainties in supernova surveys could muddle efforts to determine the nature of dark energy—and thus the fate of the universe.


    But, as Kirshner pointed out, simulating an explosion is one thing. It remains to be seen whether the models can replicate the energy and mix of elements actually seen in various type Ia explosions. And these models assume that a 1.4-solar-mass white dwarf is conveniently available and poised to explode, yet nobody knows exactly how white dwarfs reach that point, or whether there are enough of them to account for the observed rate of explosions. In fact, most observed white dwarfs weigh only a little more than half the mass of the sun, far below the explosion point.
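The 1.4-solar-mass threshold the models invoke is the Chandrasekhar limit. In the standard stellar-structure approximation (the coefficient and notation are textbook values, not from the article), it depends on the mean molecular weight per electron, \(\mu_e\):

```latex
M_{\mathrm{Ch}} \;\approx\; 1.46\,\left(\frac{2}{\mu_e}\right)^{2} M_{\odot}
```

For a carbon-oxygen white dwarf, \(\mu_e \approx 2\), giving the familiar limit of about 1.4 solar masses.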

    In the standard story, white dwarfs reach the mass limit by accreting hydrogen from a companion star. But the accretion must occur at a “just right” rate—too fast, and the accreted hydrogen will be blown away by smaller explosions before the dwarf reaches bomb mass.

    Furthermore, if white dwarfs really explode by accreting hydrogen from a companion, leftover hydrogen should be visible in the supernova remnant. But sensitive observational searches have failed to find the hydrogen. “I think this lack of hydrogen is a very, very serious issue,” said Filippenko.

    The missing hydrogen leads some experts to speculate that the companion star is not an ordinary hydrogen-rich star but something else—perhaps even another white dwarf. But searches find few double-dwarf systems likely to become supernovae. The Supernova Ia Progenitor Survey at the European Southern Observatory in Chile has observed more than 1000 white dwarfs and has found only two double-dwarf systems, Ralf Napiwotzki of the University of Hertfordshire, U.K., said at the conference.

    In one, the total mass of both dwarfs didn't reach the explosion threshold, and they wouldn't merge for 25 billion years, anyway. The other double dwarf falls just short of bomb mass. “At the moment, we can't say we have a clear-cut supernova Ia progenitor,” Napiwotzki said. But deeper searches may find more candidates, he added.

    If double dwarfs do merge and explode, their combined mass could exceed the Chandrasekhar limit, producing an unusually bright explosion. And in fact, one such unusual explosion was spotted in 2003 and reported in Nature last year by the Supernova Legacy Survey, an international project using the Canada-France-Hawaii telescope on Mauna Kea.

    Supernova 2003fg looks like a type Ia, said Andrew Howell of the University of Toronto, Canada, but glows with more than double the median Ia brightness. Its brightness and energy output suggest a combined mass of more than two solar masses, implying (among other possibilities) a double-dwarf explosion or the growth of a single white dwarf to larger than the expected maximum mass. Many experts find it hard to envision a single dwarf growing that fat, but neither has current theory established that the merger of two dwarfs would produce the observed features of a type Ia explosion.

    In any case, freak explosions such as 2003fg are just the sort that could contaminate supernova data needed to determine whether dark energy is the residual energy of empty space incorporated by Einstein into his general theory of relativity as a “cosmological constant.” If it is, the ratio of the dark energy's pressure to its density would be exactly -1, at all times and places throughout the universe. (That ratio, known as the equation of state, is negative because the pressure is negative, conferring the dark energy's repulsive effect.)

    If the ratio is greater than -1, dark energy could be a new sort of field, sometimes called quintessence, that changes its strength over time. A ratio less than -1 suggests an entirely weird “phantom” energy that would someday rip the universe to shreds (see figure below and Science, 20 June 2003, p. 1896).
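The three possibilities described above can be summarized with the standard equation-of-state parameter (notation assumed here, not drawn from the article), which compares dark energy's pressure \(p\) with its energy density \(\rho\):

```latex
w \;\equiv\; \frac{p}{\rho c^{2}},
\qquad
\begin{cases}
  w = -1 & \text{cosmological constant} \\
  w > -1 & \text{quintessence} \\
  w < -1 & \text{phantom energy}
\end{cases}
```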

    Current efforts to gauge the equation of state using supernovae are all consistent with -1 but not sensitive enough to detect small deviations. At the conference, Mark Sullivan of the University of Toronto reported a Supernova Legacy Survey analysis of 250 supernovae giving a value of -1.02, but with an error range including -1. Michael Wood-Vasey of the Harvard-Smithsonian Center for Astrophysics (CfA), presenting for another supernova survey known as ESSENCE, reported -1.05, based on more than 170 supernovae, but again with uncertainties large enough to include -1.

    Reducing such uncertainties further is a prime goal of several supernova-search satellite missions to probe dark energy that will be competing for funding, as described in last year's Dark Energy Task Force report prepared for NASA, the National Science Foundation, and the Department of Energy. But some experts doubt that supernova theory will ever be good enough to identify small deviations from -1, even with thousands of supernovae observed from a dark-energy satellite. (Some of the proposed missions, however, would measure both supernovae and other features, such as gravitational-lensing effects, that could help narrow the uncertainties.)

    In any event, better supernova data could still be useful to cosmologists, Bildsten pointed out. “If there's really two populations, you might decide that one of those populations isn't so good, and if it's in this type of galaxy or that, you don't use it for your cosmology,” he said. “Maybe that's helpful information.”

    But whatever help supernovae can provide will still depend on plugging the worrisome gaps in current textbook accounts, Kirshner said, and answers to many critical questions remain elusive. “I wouldn't say it's a crisis,” he said. “But if you ask, 'Are the pieces falling into place?' I'd say the answer is no.”

    *“Paths to Exploding Stars: Accretion and Eruption,” 19-23 March.


    The Plant Breeder and the Pea

    1. Erik Stokstad

    K. B. Saxena has spent his career trying to boost yields of pigeon pea, a crop relied on by hundreds of millions of marginal farmers. At last, he's succeeded

    In bloom.

    K. B. Saxena (right) and colleagues bred countless varieties of pigeon pea to create new hybrids.


    When he decided on his life's work as a plant breeder, K. B. Saxena made an unlikely choice. The year was 1974, and new varieties of rice and wheat were boosting production and cutting hunger around the world. With a newly minted Ph.D. from one of India's top agricultural universities, Saxena could have worked on any of these blockbuster crops. Instead, he picked a gangly, unrefined plant called pigeon pea.

    Although still barely known in the West, pigeon pea (Cajanus cajan) is the main source of protein for more than a billion people in the developing world and a cash crop for countless poor farmers in India, eastern Africa, and the Caribbean. This hardy, deep-rooted plant doesn't require irrigation or nitrogen fertilizer, and it grows well in many kinds of soil. “It's such an important crop, and it had been neglected,” Saxena says.

    During a 30-year career at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) in Patancheru, India, Saxena helped create nearly a dozen kinds of pigeon pea that mature sooner and resist diseases better than do traditional varieties. Yet the big prize—high-yielding hybrids—never seemed within reach. “People had lost hope that yield could improve,” says Saxena, who narrowly escaped being laid off a decade ago and barely managed to keep his program going during hard times at ICRISAT.

    Now, hope is back. Two years ago, Saxena's group finally succeeded in creating the first commercially viable system in the world for producing hybrid legume seed. It couldn't have come at a better time: India faces a pigeon pea shortage severe enough that the government banned exports of it and other so-called pulses last year. Last month, ICRISAT announced that one of its most promising hybrids can achieve yields nearly 50% higher than those of a popular variety. “This will become the forerunner of a pulses revolution in India,” predicts M. S. Swaminathan, a plant breeder considered one of the chief architects of the original green revolution. The first seeds should reach farmers next year, and Swaminathan himself is working on a project to make sure even the poorest can afford them.

    Deep roots

    Saxena was inspired to become a plant breeder when he was in high school. His older brother, a maize breeder, would take him into the research fields and explain what he was doing. “All that stimulation came from my brother,” Saxena says. “He encouraged me a lot.” And with the green revolution at its height, plant breeding was a hot field. After finishing a Ph.D. in cereal grains, Saxena in 1974 joined ICRISAT, which had been founded just 2 years earlier to improve five semiarid tropical crops: sorghum, pearl millet, chickpea, groundnut, and pigeon pea.

    There wasn't much competition to work on pigeon peas, Saxena recalls. Crops took 6 to 9 months to mature, slowing the pace of research. And they grew to 2 to 3 meters tall, their pods covered in a sticky gum. “It will spoil all your clothes in an hour,” Saxena says. “No one wanted to work on such a dirty crop.”

    But sensing an opportunity—and loving the dal made from pigeon peas—he plunged in. By the 1980s, the small team of plant breeders at ICRISAT—together with researchers at the Indian Council of Agricultural Research (ICAR)—had developed early-maturing varieties that could be harvested in only 3 months. That meant an entire crop of nitrogen-fixing pigeon pea could be planted before the wheat crop in northern India, helping to restore fertility to the soil. New varieties also featured improved resistance to fusarium wilt and the dreaded sterility mosaic virus known as “the green plague.” But yields hardly budged, averaging about 700 kilograms per hectare.

    The way to smash through the yield barrier is by creating plants with hybrid vigor, a well-known phenomenon in which the first generation of offspring exhibits traits—yield or overall health, for example—vastly superior to those of either parent. The process starts with picking the best plants from each generation and breeding them until all the progeny of each line have dependable traits, then crossing the two lines. This is relatively straightforward and can be done by hand in the greenhouse.

    But making enough hybrid seed to sell requires an easy way to prevent plants of each parent variety from fertilizing themselves. (Each plant carries both male and female sex organs.) Breeders like to create so-called male-sterile plants that can't make viable pollen but can still be fertilized by pollen from certain other varieties. In corn and rice, varieties had been bred to produce sterile pollen by the 1980s.

    Breeding sterile plants in pigeon pea and other legumes has proven much more difficult. For starters, the male and female parts exist within the same flower. That means researchers must pollinate the delicate ovaries by hand, and sometimes only a few percent can be successfully fertilized. This and other challenges kept hybrids off the agenda of most legume breeders. “It's theoretically possible, but it's hard to do,” says Harbans Bhardwaj, a plant breeder at Virginia State University, Petersburg, who has worked with pigeon pea.

    Proof of principle came in the late 1980s. Working with ICAR institutions, Saxena and his ICRISAT colleagues found varieties with nuclear genes that conferred male sterility. Hybrids from this line boosted yield by up to 30% and did well in field trials. But the males weren't always sterile—sometimes just a fraction of the male flowers lacked pollen. Because not all the seeds produced from them were high-yielding hybrids, companies were not interested in commercializing the plants.

    Other setbacks put the project in jeopardy, too. In the late 1990s, ICRISAT began to have major budget problems (Science, 2 January 1998, p. 26), and management decided to drop pigeon pea research. Saxena pleaded his case, but the pigeon pea team was still cut deeply; the three other breeders were laid off, as were seven of 10 technicians. After drumming up external funding from companies, Saxena rebuilt the program.

    The big advance came from finding plants with genes in their cytoplasm that confer male sterility. Unlike nuclear genes, which are segregated during meiosis, cytoplasmic genes are passed down through the egg to all the progeny, so the offspring of plants carrying the sterility-conferring cytoplasm will all be male-sterile.

    Some hints of cytoplasmic male sterility had come early on, when breeders managed to produce plants with sterile pollen by fertilizing wild pigeon peas with pollen from a cultivar. But the plants also had sterile ovaries. Saxena expanded this effort with other wild species. Each year, the group made thousands of experimental crosses and planted the seeds. Every plant was inspected when it flowered—an onerous task, as a hectare can contain up to 60,000 individual plants. In March 1996, a research assistant struck gold when he located plants with no pollen in any of their flowers—a sign of cytoplasmic male sterility. He dashed off on his motorbike to relay the news. “It was very exciting,” says Saxena. “People were really smiling in the field.” Further testing revealed that the sterility was indeed passed on through the cytoplasm.

    Still, success came slowly. Some lines produced plants that were male-sterile when grown in winter but would somehow make fertile pollen in the summer. “Sometimes we got frustrated,” Saxena says. Of the five lines they developed, only one turned out to be stable enough. To perpetuate these plants, they also had to find a maintainer line—a nearly identical plant whose pollen would fertilize the male-sterile plants yet yield seed that grows up into male-sterile plants. The final component was a restorer line, a variety that can fertilize the male-sterile plants so that the progeny will bear seeds. New traits, such as disease resistance, are bred into either or both of the parent lines to produce hybrid seed for farmers.

    Seeing green.

    Extra yield from hybrid varieties of pigeon pea could raise the income of farmers, such as these pea farmers in Eastern Kenya.


    By 2004, the system was up and running (Euphytica, October 2005, p. 289). “This is a breakthrough in plant breeding,” notes Latha Nagarajan, an agricultural economist at the International Food Policy Research Institute in Washington, D.C. “The possibilities are endless.”

    Starter seed

    Of some 300 hybrids tested so far, the best—called ICPH 2671—yields up to 3 tons of pigeon peas per hectare, 48% more than a widely used variety known as Murati. “It's a quantum jump in yield,” says Swaminathan, who thinks that yields could even be doubled with improved cultivation and pest management. He notes that legumes require careful attention to phosphorus, and Indian soil is often poor in micronutrients that pigeon pea needs, such as zinc and boron, so educating farmers about soil nutrition will be important.

    In addition to its yield advantage, Saxena says, this hybrid withstands drought and resists diseases better than the standard lines do. “This system that Saxena has developed will benefit the small subsistence farmers and consumers,” says Sharad Phatak, a horticulturist at the University of Georgia, Tifton. If 15% to 20% of the acreage is planted with hybrids, he reckons, it might take care of the pigeon pea shortage.

    There is, however, a downside: Unlike traditional varieties, hybrid seed must be bought every year, because only the first generation has the hybrid vigor. Most of those seeds will come from companies, which makes some observers worry that small farmers won't be able to afford them. ICRISAT has provided the male-sterile system, which is in the public domain, to a consortium of 15 Indian seed companies so that they can create their own hybrid pigeon pea varieties. Several companies are also preparing to sell ICPH 2671, and Saxena estimates that the seed will cost about $3.25 a kilo, about 50% more than standard cultivars. He says it's likely that some government agencies will sell the seed at half price to poor farmers.

    Swaminathan isn't taking any chances, however. His foundation, based in Chennai, is beginning a project to train women to produce the hybrid seed themselves from ICRISAT seeds. Beginning in June, agronomists will go to villages about 180 km south of Chennai and teach some 100 women, mostly the wives of subsistence farmers. The goal is for them eventually to sell hybrid seed in their neighborhoods. “The principle is social inclusion and technology access for all,” Swaminathan says. “You can keep the cost of seed low and increase employment in villages.”

    Meanwhile, Saxena, now 58 and 2 years from retirement, spends half his time promoting the sterile lines to breeders at universities and companies, as well as encouraging farmers to try them out. He's also involved in promoting the use of pigeon pea in other countries, such as China, where it's used in several ways but mainly as a quick fix for soil erosion. Although Saxena's work may not trigger the dramatic agricultural revolution that he witnessed at the beginning of his career, it could still improve the lives of hundreds of millions of people.

    Boom Time for Monkey Research

    1. Elizabeth Pennisi

    Macaque researchers have blazed a trail of biomedical firsts. Now, with macaque genomic tools at last in hand, this research is rushing ahead in new directions

    Breaking new ground.

    Macaque-specific genomics tools are making studies involving this monkey more useful.


    The rhesus macaque is the unsung hero of the maternity ward. In 1940, Nobel laureate Karl Landsteiner and his student Alexander Weiner discovered in this monkey a blood protein they called the Rh (for Rhesus) factor. Researchers soon found the Rh factor in some but not all humans and realized that a mother could react immunologically against the factor in her fetus. Now a simple test and a vaccine prevent that reaction—and resulting mental retardation or even death in about 20,000 U.S. newborns a year.

    Thanks to Landsteiner, the Rh factor became one of the early contributions that this 60-centimeter-tall monkey has made to human health. More recently, the macaque has revealed new insights into disorders as diverse as AIDS and depression. But researchers seeking to understand the genetic underpinnings of macaque diseases and behavior have been thwarted. Unlike mouse and human genetics, macaque genetics was virtually unexplored territory until recently, with relatively few genes identified.

    And so about 5 years ago, Jeffrey Rogers and Michael Katze decided it was high time to push macaque biology into the 21st century. Infected by the excitement over the human genome, they decided to go after the macaque genome and to develop the tools to pin down the genes underlying the disorders and behaviors they studied.

    They have succeeded in spades. On page 222, they and their colleagues at Baylor College of Medicine in Houston, Texas, describe the high-quality draft sequence of the rhesus macaque (Macaca mulatta) genome. Katze, a virologist at the University of Washington, Seattle, used this sequence to develop a macaque-specific microarray that reveals the expression of thousands of genes at once. Rogers, a geneticist at the Southwest National Primate Research Center in San Antonio, Texas, has drawn up genetic linkage maps for both baboons and rhesus macaques and is genotyping thousands of macaques in pursuit of specific genes—genes that the sequence will make easier to find. “You now will have the tools and reagents to do in macaque what you can do in humans and in mice,” says Katze. “It will completely transform the rhesus as an animal model in human disease in every way.”

    Down to the genes.

    Studies of infant macaque behavior help researchers better understand the genetic basis of shyness.


    Already, these efforts are leading to a better understanding of how genes are regulated in diseases such as influenza, and what variants of genes are important for certain behaviors. Researchers are also figuring out how macaques differ genetically from humans—a key step in understanding both the value and limitations of these monkeys as surrogates for humans in experimental work. In AIDS research, for example, it's important to find out where gene expression in humans and macaques diverges. Otherwise, “we are never going to get the right cure or the right [drug] target,” says geneticist Timothy Ravasi of the University of California, San Diego.

    Building genomic tools

    The list of “first discovered in” macaques is impressive. For example, using macaques, researchers demonstrated both that a virus caused polio and, ultimately, the efficacy of the polio vaccine. Since 1985, these animals have been sine qua non for HIV studies and vaccine trials because they can be infected by a simian cousin of the HIV virus that causes a progression of disease similar to that in humans. Then too, monkeys behave much more like people than, say, mice or rats, and they have proven to be good stand-ins for humans in neuroscience and behavioral studies. For example, long-term observations of monkeys raised with their parents or just peers led to key insights into the role of mothering in shaping personalities. Back in the 1990s, the first primate embryonic stem cell came from a macaque. All told, researchers publish about 2000 papers a year on macaques, with publicly funded researchers conducting studies on about 40,000 animals and drug companies using many more.

    But without good genetic tools, macaque research could only go so far. Researchers could document an immune response, for example, but not the changes in gene expression associated with that response. They could identify genetically based behaviors—headbanging similar to that in humans occurs naturally in some macaques—but had no way to track down the genes involved, much less knock out a gene. That's a big contrast with mice, in which researchers have characterized genes involved in dozens of human diseases, thanks to mutagenesis and gene-knockout technologies. Researchers have also developed mouse- and human-specific microarrays—chips or glass slides in which probes of short DNA sequences measure the activity of thousands of genes at once and reveal complex gene circuitries. “I envy people who work in human and mouse,” says Shoukhrat Mitalipov, a developmental biologist at the Oregon National Primate Research Center in Beaverton.

    Dozens of macaque researchers have made do with microarrays equipped with human DNA probes, but they have never been sure how well the results represented the monkey's gene activity. The average 3% difference between macaque and human genes means that for some genes the macaque sequence may be invisible to a human-based microarray. “If you want to know exactly what's expressed in monkeys, you have to use monkey sequence,” says Shrikant Mane, a neuroscientist at Yale University.

    Those frustrations drove Katze, Rogers, and their Baylor colleagues to pull together a proposal in 2002 to the National Human Genome Research Institute to sequence the macaque genome. They got the go-ahead in 2005 for the $20 million project, with Baylor's Richard Gibbs leading the sequencing effort and coordinating more than 100 researchers from around the world. As soon as the data started trickling in, Katze teamed up with Agilent Technologies in Santa Clara, California, to put together a macaque microarray based on the new sequences. Several prototypes later, they came up with a design representing all of the macaque's 20,000 genes.

    At the same time, a group led by Robert Norgren, a neuroscientist at the University of Nebraska Medical Center in Omaha, had started on its own gene-chip design. The researchers first used the human DNA sequence to track down the equivalent sequence in macaque, working with Affymetrix Inc. in Santa Clara, California. As the macaque genome sequence went online, the chip was expanded to cover all the genes.

    Monkey model.

    Rhesus macaques are the most commonly used primates in biomedical research and are useful in studies from AIDS to depression.


    Researchers say both macaque-specific microarrays are quite promising. “We can now do comparative genomics at the level of gene expression. [We can ask] how is the macaque genome being expressed and how is it similar or different from the human,” says Trey Ideker, a genomicist at the University of California, San Diego.

    For Katze, the first task has been to understand how the monkeys react at the genetic level to potentially deadly viral infections. Katze, Yoshihiro Kawaoka of the University of Wisconsin, Madison, and their colleagues infected seven close relatives of the rhesus with a reconstructed version of the flu virus that killed more than 50 million people in the infamous 1918 epidemic; three other monkeys were infected with a modern human flu virus. At 3, 6, and 8 days after infection, they killed a few of the macaques and analyzed their blood and lungs, using microarrays to study gene expression.

    Initially, the macaques' lungs severely overreacted to the 1918 flu virus, the researchers reported in the 18 January issue of Nature. With both viruses, the macaques' first line of defense, the innate immune system, kicked in, with genes for inflammatory molecules revving up. In the macaques battling the modern virus, that reaction was temporary, but in those with the 1918 flu, the genes were not only more active but also active much longer, causing extensive tissue damage.

    To make matters worse, the subsequent ability of the cell to attack the virus was dampened in the monkeys with the more deadly flu. Type 1 interferon proteins typically activate genes for other proteins that inhibit viral replication. But with the 1918 virus, this genetic pathway seemed disturbingly quiet.

    Katze is now one of dozens of infectious-disease researchers using monkey-specific microarrays, including in AIDS research. The microarrays are proving their worth in other disciplines too. To take just one example, neuroendocrinologist Cynthia Bethea of the Oregon National Primate Research Center is using the arrays to delve deeper into the effects of estrogen and progesterone on serotonin, a brain chemical important in mood, appetite, and sex drive.

    She and her colleagues have compared gene expression in serotonin-producing nerve cells in menopausal monkeys with and without hormone treatments. Her unpublished results show that with hormone exposure, “there's a dramatic shift” in a biochemical pathway that leads to enhanced production of serotonin, she says. That pathway involves tryptophan, which these nerve cells can use to make either serotonin or a toxin that destroys the nerve cell. In macaque hormone recipients, Bethea's team finds increases in the gene activity of five enzymes used to convert tryptophan to serotonin and a decrease in five that help produce the toxins. “Our hypothesis is that estrogen and progesterone prevent serotonin neuron cell death and encourage plasticity,” Bethea says.

    To explore this idea further, Bethea wants to coax embryonic stem cells to become specialized serotonin-producing nerve cells in a lab dish. Here too, the microarrays come in handy, as a tool to examine the nerve cells' gene expression. So far, the chips show that Bethea has some work to do: The lab dish neurons still express many genes typically active only in developing neurons.

    Better gene hunts

    At the same time that dozens of researchers are building up a picture of overall gene activity in macaques, Rogers has been working toward tracking down specific genes, taking data from many macaque individuals. Last year, he and his colleagues published a genetic linkage map of the rhesus macaque containing known landmarks or bits of identifiable DNA, places where the sequence varies from one individual to the next. These maps help researchers home in on specific genes when used with family studies. (The sequence itself, in contrast, comes from a single macaque and provides few clues about what varies between individuals.) And thanks to long-term breeding programs for the rhesus macaque, Rogers and his colleagues can work with large families whose genealogies are known or can be determined. The stage is set to do genetic epidemiology, says Rogers.

    Seeing red.

    A microarray study revealed that although both modern and 1918 flu viruses revved up inflammatory genes early on (red), those genes remained dangerously active in animals infected with the 1918 virus.

    CREDIT: D. KOBASA ET AL., NATURE 445, 319 (2007)

    For example, researchers have long studied various behaviors in macaques, including indicators of anxiety or shyness, such as how long it takes an infant to walk away from its mother and explore new surroundings. Judy Cameron of the Oregon primate center has recorded how infants react to such novel situations and found that the exploratory behavior “is strongly heritable,” says Rogers.

    But until recently, Cameron and others had no way to narrow down where along the macaque's 21 pairs of chromosomes the gene or genes responsible for this behavior are located. Now Rogers and Cameron are using the genetic map to note which landmarks are frequent in infants who are timid or in those who are adventurous, for example. The landmarks will help researchers identify the general vicinity of the genes. Then the team will search the genome sequence at that location for possibly relevant genes and test them. “We're very excited,” says Rogers. “This will provide important new information about the genetics of susceptibility to psychiatric disorders among humans.”

    Other groups are taking a similar tack to uncover genes for vulnerability to stress or risks for neurological and eventually heart and other diseases. Their analyses are just the beginning of a revolution in macaque research. “As we know much more about the genome, we are in a position to do much more sophisticated work in this species,” says immunologist Norman Letvin of Harvard Medical School in Boston. “There will be a great deal of work going forward now that these tools are available.” Landsteiner would be proud.

    Genomicists Tackle the Primate Tree

    1. Elizabeth Pennisi

    Primates are taking center stage in genomics, with the macaque serving as an early milestone in understanding our relatives' genomes—and therefore our own

    The deciphering of the human genome was a humbling experience. The promise of the project, in the words of James Watson, was “to find out what being human is.” But even when most of the 3 billion bases of the human genome had been properly placed, much about the sequence defied understanding. Which of the 20,000 human genes uncovered are the ones that set Homo sapiens apart from other mammals, or other primates? To find out, genomicists have been scrambling for more data ever since, most recently from primates. “The goal is to reconstruct the history of every gene in the human genome,” says Evan Eichler, a geneticist at the University of Washington, Seattle. And that requires data from our relatives. DNA from different branches of the primate tree will allow us “to trace back the evolutionary changes that occurred at various time points, leading from the common ancestors of the primate clade to Homo sapiens,” says Bruce Lahn, a human geneticist at the University of Chicago in Illinois.

    In 2005, the unraveling of the chimp genome provided tantalizing hints about differences between us and our closest relative (Science, 2 September 2005, p. 1468). Now on page 222, the third primate genome, that of the rhesus macaque, begins to put the chimp and human genomes into perspective. Macaques are Old World monkeys, which split perhaps 25 million years ago from the ape lineage that led to both chimpanzees and humans (see diagram). So when compared to apes, monkeys can help identify the more primitive genetic variants, allowing researchers to tease out the changes that evolved only in apes. Researchers want to take such analyses back to even more ancient evolutionary divergences, and so seven more primate genome sequences are under way, as is the sequencing of the DNA of two close nonprimate relatives. Together, these genomes “should teach us general principles of primate evolution,” says Lahn.

    A consortium of more than 100 researchers unraveling the macaque genome is detecting genes that have changed faster than expected in the chimp and human lineages; such speed is usually a telltale sign of evolutionary significance. The researchers are also finding that dozens of base changes known to put humans at risk for disease also exist in the healthy macaque—but not in the chimpanzee. That suggests that some gene variants implicated in disease are relics of the ancestral primate condition. Such studies “may be the bridge between comparative genomics and evolutionary biology,” says Richard Gibbs, director of the Baylor College of Medicine Human Genome Sequencing Center in Houston, Texas, and coordinator of the rhesus macaque genome project.

    Gibbs and his colleagues are tackling evolutionary biology in reverse. They are identifying key genomic differences without yet knowing how or whether those differences translate into traits that provide survival advantages. Traditionally, researchers have first traced changes in the shapes and sizes of beaks, bodies, brains, and so on, then sought the genes behind them. The hope is that the two modes of inquiry will meet in the middle. But so far researchers have come up short in linking genomic changes to traits subjected to natural selection and other evolutionary forces, ironically because of sparse biological data on nonhuman primates, says glycobiologist Ajit Varki of the University of California, San Diego: “[Without] basic information about the chimp, its physiology, its diseases, its anatomy, you are really very impoverished about what you can say.”

    Complete coverage.

    Researchers plan to eventually sequence the genomes of all these primates and related species, with human, macaque, and chimp now published. The animals are arranged in an artist's rendition of their family tree, with estimated divergence dates in millions of years.


    Beyond mouse

    In 2001, the human genome sequence drove home how little we knew about our genomic selves. About one-third of our genes were complete unknowns. Researchers immediately started lining up our DNA with that of worms, fish, and rodents to see what genes matched up and to try to pin down functions. They found not just genes but also conserved regions within the “junk” DNA that played as critical a role in genome function as the genes themselves. Their finds led to an unquenchable thirst for sequence data as a way to clarify how genomes work. “Every additional species increases our ability to resolve functional from nonfunctional [DNA],” explains Ross Hardison, a molecular biologist at Pennsylvania State University in State College.

    The surprise of the chimp genome, the first nonhuman primate to be sequenced, was the large number of insertions and deletions that differed between humans and our closest living cousins. There were more changes in the order and number of genes and blocks of genes than changes in single base pairs, highlighting the importance of this kind of expansion and shuffling in primate speciation.

    But the chimp data proved frustrating as well, because researchers couldn't put the chimp-human comparisons into an evolutionary context. If humans had one base, say a C, at a position where chimpanzees had a G, researchers had no way of knowing which base represented the ancestral condition. Consequently, there was no way to tell whether the change at that position had occurred only in humans—and therefore perhaps helped define Homo sapiens—or in the chimp. And so in 2005, the National Human Genome Research Institute began stuffing more primates into the sequencing pipeline and approved the $20 million rhesus macaque sequencing project. “It is great to finally have a [distant relative] that allows us to assign differences between the human and chimpanzee genomes to either the human or the chimpanzee evolutionary lineage,” says Svante Pääbo of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.
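
    The parsimony logic Pääbo describes can be sketched in a few lines of Python. This is purely an illustration of the reasoning, not the consortium's actual analysis pipeline: if the outgroup (macaque) matches one of the two apes at a site, the simplest explanation is that the other ape's lineage changed.

```python
# Toy sketch of outgroup-based lineage assignment (illustration only,
# not the macaque consortium's actual method). Given the base observed
# in human, chimp, and macaque at one aligned position, parsimony says
# the lineage that differs from the outgroup is the one that changed.
def assign_lineage(human: str, chimp: str, macaque: str) -> str:
    """Return which lineage most parsimoniously changed at this site."""
    if human == chimp:
        return "no human-chimp difference"
    if macaque == chimp:
        return "human lineage"   # chimp matches the outgroup, so human changed
    if macaque == human:
        return "chimp lineage"   # human matches the outgroup, so chimp changed
    return "ambiguous"           # all three differ; parsimony cannot decide

print(assign_lineage("C", "G", "G"))  # human lineage
```

    With only human and chimp, the first two branches of this function are all one can compute; the macaque supplies the third base that makes the call possible.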

    Sequencers aren't stopping at the macaque. Sequencing of many primates, including the orangutan, the gibbon, and a New World monkey, the marmoset, is under way, with promises that the baboon should be next. In 2006, the Wellcome Trust Sanger Institute in Hinxton, U.K., started deciphering the gorilla genome, planning coverage similar to that of the macaque. Meanwhile, genomicists have started sequencing key genes and regulatory regions from other primates, too. “To tell what is human-specific, you need this comparative context,” says Anne D. Yoder, an evolutionary biologist at the Duke Lemur Center in Durham, North Carolina.

    To know a genome

    Already, the primate genomic data are revealing bits of our genetic history. For example, more than 98% of chimp and human bases agree. So researchers hoping to pick out areas with fewer base changes than expected—such as regulatory regions conserved in all apes—are awash in a tide of virtually identical DNA. But when the search is expanded to additional primates, there's more variation in the sequences, and previously undetectable conserved regions, even small regulatory sequences, begin to surface.
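
    The idea of scanning a multi-species alignment for stretches that stay nearly identical can be sketched as follows. This is a deliberately crude illustration (the window size and identity threshold are arbitrary choices, and real conservation scans use phylogeny-aware scoring), but it shows why adding more divergent species helps: a window identical across many primates is far less likely to be identical by chance.

```python
# Illustrative sketch of a conserved-region scan over a multi-species
# alignment (not the method used in the studies described here).
def conserved_windows(alignment, window=20, min_identity=0.95):
    """alignment: list of equal-length aligned sequences (strings).
    Returns start positions of windows where nearly every column is
    identical across all species."""
    length = len(alignment[0])
    hits = []
    for start in range(0, length - window + 1):
        identical = sum(
            1 for i in range(start, start + window)
            if len({seq[i] for seq in alignment}) == 1  # one base in column
        )
        if identical / window >= min_identity:
            hits.append(start)
    return hits
```

    With only human and chimp in `alignment`, almost every window passes the threshold; adding more distant primates thins the hits down to the genuinely constrained regions.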

    For example, Dario Boffelli, now at Children's Hospital Research Center in Oakland, California, and his colleagues at the Joint Genome Institute in Walnut Creek, California, wanted to understand the regulation of genes that help maintain healthy levels of cholesterol in the body. They looked at 558,000 bases covering genes involved in cholesterol processing, comparing human and six other primates: baboon, colobus monkey, dusky titi, marmoset, owl monkey, and squirrel monkey. They discovered regions with virtually the same sequence in all the primates. Subsequent experiments showed that three of the newly identified conserved regions do indeed regulate genes in the cholesterol pathway, Boffelli's team reported in January in Genome Biology.

    DNA reader.

    Baylor's Richard Gibbs led more than 100 researchers in the sequencing of the macaque genome.


    In other cases, particularly as researchers look for differences that reflect independent evolution, data from even one additional primate can help. In one analysis, the macaque team looked at 64,000 places in the macaque genome where they knew a disease-related mutation existed. In the past, researchers have assumed that such mutations were specific to humans. A few chimp genes had hinted that some problematic bases might predate humans, but the macaque drives home how often this may be the case. Hardison and his Pennsylvania State colleague Webb Miller found more than 200 sites where the macaque or the chimp had the same base at the same position as the diseased or at-risk human. In 97 instances, both the chimp and the macaque matched the aberrant human base; in 48 cases it was just the chimp. And in 84 cases the rhesus, but not the chimp, matched the diseased human sequence, possibly because chimps also independently evolved away from the ancestral condition at those sites.

    For example, about 1 in 15,000 people have phenylketonuria because their gene for an enzyme needed to process the amino acid phenylalanine is defective. Untreated, the buildup of a toxic byproduct causes mental retardation. In macaques, that same defective gene is the normal condition and has no ill effects. It could be that many “disease” variants in humans are simply ancestral variants “where [a dietary or environmental] change between the human ancestor and the human has made a variant that used to be good, bad,” says Miller.

    In addition, the macaque genome consortium combed the macaque, chimp, and human genomes for families of genes that had expanded in one or more species. A family consists of the original gene and any subsequent copies, many of which evolve slightly different sequences and functions over time.

    One in particular intrigued Miller. This family, called PRAME—short for “preferentially expressed antigen of melanoma” because the genes are activated in melanoma and other types of tumors—has had a complex history in humans. It has at least 26 intact members on chromosome 1. It's one of the regions of the human genome that “are going wild,” says Miller. The chimp has a similarly complex set of PRAME genes, but Miller found just eight PRAME genes in the macaque. “The cluster is very simple [and has] remained stable for millions of years,” he explains. Working from this simpler, presumably ancestral set, he and his colleagues hope to unravel the timing and types of duplications that resulted in the abundance of human PRAME genes.

    Elsewhere in the genome, the consortium found that the macaque has as many as 33 major histocompatibility complex (MHC) genes, more than triple the number in humans. “When you see a dramatic change, it suggests there was some evolutionary selection that favored those extra copies,” says James Sikela, a computational biologist at the University of Colorado Health Sciences Center in Denver. “The tough question is, 'What favored that event?'”

    While Sikela and colleagues ponder the macaque's need for MHC genes, Adam Siepel of Cornell University and his collaborators found other genes in which mutations were apparently favored by selection. Such positive selection, as it is called, typically shows up as bases that have mutated faster than would occur by chance. So Siepel's team compared 10,376 macaque genes with their equivalents in both the chimp and human genomes. They sought genes with a relative mutation rate that was higher in bases that changed the encoded amino acid than in bases that did not alter the coding. The researchers found 178 such genes, “considerably more” than previously identified in human-chimp scans, says Siepel. Some genes, such as a few involved in the formation of hair shafts, were changing rapidly in the three species, possibly because climate change or mate-selection strategies spurred rapid evolution, Siepel speculates.
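
    The counting at the heart of such a scan can be sketched in miniature: compare two coding sequences codon by codon and tally changes that alter the amino acid (nonsynonymous) against those that do not (synonymous). A real test such as dN/dS also normalizes each count by the number of possible sites of each kind; that step is omitted here, and the codon table below is just the small subset of the standard genetic code needed for the example.

```python
# Hedged sketch of the counting idea behind a positive-selection scan,
# not the statistical test Siepel's team actually used. CODON_TABLE is
# a small subset of the standard genetic code.
CODON_TABLE = {
    "TTT": "F", "TTC": "F",   # phenylalanine
    "CTT": "L", "CTC": "L",   # leucine
    "GAA": "E", "GAG": "E",   # glutamate
    "GAT": "D",               # aspartate
}

def classify_differences(seq_a: str, seq_b: str):
    """Tally nonsynonymous and synonymous codon differences between
    two aligned coding sequences of equal length."""
    nonsyn = syn = 0
    for i in range(0, len(seq_a), 3):
        a, b = seq_a[i:i + 3], seq_b[i:i + 3]
        if a == b:
            continue
        if CODON_TABLE[a] == CODON_TABLE[b]:
            syn += 1    # base changed, amino acid did not
        else:
            nonsyn += 1  # base change altered the protein
    return nonsyn, syn

# TTT->TTC is synonymous (F->F); GAA->GAT is nonsynonymous (E->D).
print(classify_differences("TTTGAA", "TTCGAT"))  # (1, 1)
```

    An excess of nonsynonymous over synonymous changes, relative to what chance predicts, is the signature of positive selection the article describes.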

    Other positively selected genes detected in at least one species included those involved in cell adhesion and cell signaling, as well as genes coding for membrane proteins. “We don't really know enough at this stage to point to a case where we have a really nice story of a difference at the molecular level that we can connect to a known phenotypic difference,” Siepel laments.

    Siepel and others say that such stories require more primate sequences. Evidence of positive selection in the same genes in multiple species will provide more clues to what prompted such rapid evolution. Moreover, researchers can be more confident about labeling a gene as “human-specific” once they have looked in a number of our relatives and not found it. “The more primates one can compare, the better,” says Sikela.

    Sequencing decisions require tough choices about what species to sequence and how thoroughly, however. For his part, Boffelli thinks seven or eight primates would suffice and favors apes over prosimians, the most primitive living primates. With ape DNA, it will be easier to look for positive selection that led to humans. But Yoder thinks it's also important to understand how the whole primate branch has evolved, a point long made by researchers studying anatomy and behavior. “If you are going to understand which genes are primate-specific, you need a pretty broad phylogenetic spectrum, [with] things outside the primate clade but close to it,” she notes. That argument has already brought tree shrews and flying lemurs (which are not lemurs at all) into the picture, with researchers planning a quick skim across the DNA to get a very rough draft sequence.

    Know thy genes.

    The genomes of the gorilla, chimp, orangutan and human (left to right) will help clarify our evolution.


    Others warn that the quick skim, which is also planned for the bushbaby, mouse lemur, and tarsier, might not be enough, however. With anything short of finished sequence, the computer programs may pick up differences—signs of evolution—that in reality may be sequencing errors, warns Miller. That was the lesson of the chimp genome, which initially was not a very polished draft.

    Varki says the genomic work promises to be challenging in other ways, too: “At the genomic level, evolution is extremely messy, involving every conceivable mechanism, probably with lots of blind alleys and red herrings. Deciphering the significance of these molecular changes will be far, far more complicated than I imagined.” Nonetheless, Siepel predicts, “we're going to learn a lot in the next 5 years.”
