# News this Week

Science  19 Jun 2009:
Vol. 324, Issue 5934, pp. 1496
1. Swine Flu

# After Delays, WHO Agrees: The 2009 Pandemic Has Begun

1. Jon Cohen and
2. Martin Enserink

In the end, it was an anticlimactic, almost tedious event. When World Health Organization (WHO) chief Margaret Chan declared last week that for the first time in more than 40 years the world is facing an influenza pandemic, she simply stated what everybody already knew. “The scientific criteria for an influenza pandemic have been met,” Chan told reporters in WHO's main assembly hall in Geneva shortly after 5 p.m. local time on 11 June. “I have therefore decided to raise the level of influenza pandemic alert from phase 5 to phase 6,” the highest level in WHO's pandemic alert system.

Many leading influenza scientists and public health experts say that those scientific criteria had been satisfied for several weeks and that WHO postponed its decision unnecessarily. “It's a shame it didn't happen sooner,” says Marc Lipsitch, an epidemiologist at the Harvard School of Public Health in Boston, who calls the debate over the meaning of pandemic “a big distraction.” But WHO says that science wasn't the only factor and that the timing was carefully calibrated to ensure that countries were well-prepared to prevent overreaction. “This was a message to the globe,” says Michael Ryan, director of global alert and response at WHO. “We're operating at the intersection of science, policy, and diplomacy.”

Scientists hope the end of the debate will lead to a renewed focus on mitigating the virus's effects as it washes over the globe. Chan's declaration brings to the fore many important decisions still to come. Chief among them are the mass production and equitable distribution of a vaccine, a key topic during a New York City meeting between Chan and U.N. Secretary-General Ban Ki-moon earlier this week.

WHO had jumped from phase 3 to 4 and then to 5 in less than a week in late April, as the novel H1N1 virus spread widely in Mexico, the United States, and Canada. Phase 6 requires that the virus make similar inroads in communities in another of WHO's six global regions. By mid-May, many epidemiologists thought the United Kingdom, Spain, and Japan met that criterion. But Ryan contends that those were localized outbreaks, often centered around schools and other institutions, and not the generalized onslaught expected during a pandemic.

At the World Health Assembly (WHA) that began on 18 May, several countries warned Chan that calling the outbreak a pandemic could create panic about a virus that did not seem to cause more serious disease than seasonal flu. Because many preparedness plans were written with the much more deadly avian flu strain H5N1 in mind, the declaration might also trigger draconian measures, they said. WHO has since acknowledged that its phasing system needs fixing, as it relies solely on geographic spread and doesn't factor in severity.

Through the first week in June, as cases began to skyrocket in Australia, WHO continued to put off the final call. Although country representatives at WHA had urged Chan to move slowly before declaring phase 6, Ryan insists that no prolonged lobbying campaign occurred. Then on 10 June, Chan led a teleconference with the eight hardest-hit countries (Mexico, the United States, Canada, Chile, Spain, the United Kingdom, Japan, and Australia) about the extent of their outbreaks. Countries such as Japan and the United Kingdom still weren't ready to say they had a full-blown epidemic, Ryan says. But then, Chan turned the question around: Could they be positive that there was not sustained community transmission within their borders? All agreed that they couldn't. At the end of the call, Chan decided to convene the so-called Emergency Committee the next morning. The advisory group supported the move to level 6, which Chan announced hours later.

In delaying the decision, WHO appears to have allowed politics to trump science, says virologist Albert Osterhaus of Erasmus Medical Center in Rotterdam. But other scientists say they understand the difficult situation WHO confronted. “It's realpolitik for someone like Margaret Chan,” says Ian Gust, a veteran flu researcher at the University of Melbourne in Australia. Moreover, the alert system isn't an exact science either, says flu expert Angus Nicoll of the European Centre for Disease Prevention and Control in Stockholm. Under the two-region rule, outbreaks in Thailand and Vietnam—which belong to two different WHO regions—could trigger level 6, whereas similar outbreaks in Canada and Chile, both part of the Americas region, cannot. Ryan says the measured global response last week shows that the balancing act worked: “You don't see people running around like chickens with their heads chopped off.”
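The asymmetry Nicoll points out can be made concrete with a toy check. The function below is purely illustrative (the real phase-6 decision involves far more than geography, as the article makes clear), though the region assignments follow WHO's actual groupings:

```python
# Toy illustration of the two-region rule Nicoll describes: phase 6 is
# triggered when sustained community outbreaks span at least two of
# WHO's six regions. The function is hypothetical; the region
# assignments follow WHO's actual groupings.
WHO_REGION = {
    "Thailand": "South-East Asia",
    "Vietnam": "Western Pacific",
    "Canada": "Americas",
    "Chile": "Americas",
}

def triggers_phase_6(countries_with_community_spread):
    """True if the listed outbreaks span two or more WHO regions."""
    regions = {WHO_REGION[c] for c in countries_with_community_spread}
    return len(regions) >= 2

# Thailand and Vietnam sit in different WHO regions...
print(triggers_phase_6(["Thailand", "Vietnam"]))  # True
# ...while Canada and Chile are both in the Americas region.
print(triggers_phase_6(["Canada", "Chile"]))      # False
```

The different outcomes fall straight out of where the region boundaries happen to lie, not out of any epidemiological difference between the two scenarios.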

The delay had one upside for vaccine makers: Companies are only now completing production of the seasonal flu vaccine for next winter in the Northern Hemisphere. Had phase 6 been announced a month ago, some may have been forced to cut short that production to make way for the pandemic vaccine, jeopardizing the health of millions of people who might be left unprotected from seasonal flu strains.

One big unknown is whether companies will be able to make enough pandemic vaccine to meet the demand. Based on studies and talks with the vaccine industry, WHO has estimated that companies could churn out a maximum of 4.9 billion vaccine doses the first 12 months after they start production—if everything works in their favor. A more conservative estimate, WHO says, is 1 billion to 2 billion vaccine doses in the first year. For either scenario, a so-called antigen-sparing strategy, which uses a vaccine enhancer called an adjuvant to make the vaccine's viral proteins (the “antigen”) trigger stronger immune responses, will be essential. If less antigen is needed per dose, manufacturers can produce more doses in a shorter time.

Many researchers are worried that the adjuvants could slow the regulatory process in the United States. The U.S. Food and Drug Administration licenses vaccines, not adjuvants themselves, but the only adjuvant used in FDA-approved products is an old-fashioned formulation called alum. GlaxoSmithKline and Novartis, two of the largest influenza vaccine makers, both hope to use “next generation” adjuvants, which are included in vaccines already approved for use in many countries. Norman Baylor, director of FDA's office of vaccine research and review, doesn't expect that to pose “a big hurdle,” but he does note that antigen-sparing strategies benefit populations, not individuals. “You have to think about those trade-offs,” he says.

Chan called on the world last week to ensure equitable access to the vaccine. The agency is hoping that poor countries can benefit from donations by drug companies, a tiered pricing system, or funding by rich countries. That may be difficult to realize, however, because an unknown number of doses is already tied up through contracts between pharmaceutical companies and countries that can afford them.

How useful—or crucial—the vaccine will be depends in part on how severe the pandemic is. For now, WHO has classified the severity as “moderate,” because it looks nowhere near as severe as H5N1 or the dreaded 1918 pandemic virus. Still, Chan warned last week that nasty surprises could be in store if the virus becomes more pathogenic or develops resistance to drugs that can now stop it. “The virus writes the rules,” she said.

Harvard's Lipsitch worries that people are underestimating the dangers, noting that the novel H1N1 virus, unusually, causes severe disease in young people, and that many common conditions, such as asthma, diabetes, and pregnancy, are underlying risk factors for severe disease. Tens of thousands of people could die in age groups for which seasonal flu is rarely lethal, he says. “If school kids start dying in substantial numbers, that will cause people to take notice. But at that point, it's too late.”

2. Public Health

# Expanded U.S. Drug Agency to Control Tobacco

1. Jennifer Couzin-Frankel and
2. Robert Koenig

The U.S. Food and Drug Administration (FDA) is on the cusp of gaining power for the first time to regulate cigarettes and other tobacco products. Last week, the U.S. Senate followed the lead of the House of Representatives, voting 79–17 to give FDA this authority, and President Barack Obama has vowed to sign the legislation. It will allow FDA to ban certain ingredients in cigarettes, push for tougher labeling, and create an FDA Center for Tobacco Products within 90 days after the law takes effect.

A government source, speaking on condition of anonymity because he was not authorized to talk to the press, says that the new center could be staffed by as many as 1000 employees, including several hundred scientists. A user fee paid by tobacco companies will finance the expansion, which includes an advisory committee on which tobacco companies will be able to place nonvoting members.

The bill has its problems, antitobacco activists say. For one, this gives cigarettes “the imprimatur of the FDA,” says Alan Blum, who directs the Center for the Study of Tobacco and Society at the University of Alabama, Tuscaloosa. Public health researchers are also wary because Philip Morris supported the bill. In a statement, the Altria Group, which owns the company, hailed the legislation for providing “important benefits to adult consumers for many years to come.”

Despite the concerns, this action “is long overdue,” says Michael Cummings, a tobacco control expert at the Roswell Park Cancer Institute in Buffalo, New York. “There is no government agency right now that can even tell you what products are being sold where.”

The bill doesn't permit FDA to ban nicotine. But FDA can keep other components out, such as sugary flavorings that increase the appeal of cigarettes. It also allows FDA to strengthen U.S. warning labels—“the worst warning labels in the world,” says Gregory Connolly of the Harvard School of Public Health in Boston, who faults them for not including images of tobacco-driven diseases that have been found to discourage cigarette smoking.

One big unknown is how, exactly, FDA will exercise its newfound authority. Connolly notes a tension in current tobacco-control measures between “product harm reduction”—promoting cigarettes that are deemed less toxic—versus “product use reduction,” or preventing smoking in the first place. The latter, he says, has been shown to reduce tobacco-related death and disease, whereas the former has not. Like others in tobacco control, Connolly is skeptical about “safe” cigarettes and hopes FDA will focus on discouraging smoking and making cigarettes less appealing and less addictive. Cummings expects that FDA will do “its own validation in testing products,” to ensure the tobacco companies don't make misleading claims.

3. U.S. STEM Education

# Report Calls for Grassroots But Comprehensive Changes

1. Jeffrey Mervis

Think globally, act locally. That popular slogan, used by activists of various stripes, captures the spirit of what the latest blue-ribbon panel says is needed to improve U.S. math and science education.

The $1.5 million study,* from the Carnegie Corporation of New York, melds the handwringing in previous reports about the overall state of U.S. education with specific warnings about weaknesses in math and science education. It calls for more rigorous math and science content, improved standards and assessment, better training for teachers, and more innovative schools. Those changes, if adopted, would not only improve math and science, says mathematician Phillip Griffiths, past director of the Institute for Advanced Study in Princeton, New Jersey, and commission chair, but also “do school differently.”

But nothing will happen, the report warns, until everyone with a stake in U.S. education—from politicians and business leaders to principals and university professors—gets involved. Unlike in countries with a centralized education system, what happens in the 95,000 schools across the United States is determined largely by local and state budgets, policies, and practices. The federal government acts more like a referee, cheerleader, and trendsetter.

For the most part, the report backs a slew of current reform efforts, notably a 46-state consortium hoping to develop a common set of “fewer, clearer, and higher” standards in reading, math, and science. The report plows a bit of new ground, however, with a proposal for an alternative to the standard precalculus pathway in high school mathematics that would focus on statistics, data analysis, and other applied topics. It also calls on reformers to pay more attention to improving science.

The A-list of attendees at the report's Washington, D.C., rollout—including Education Secretary Arne Duncan, Representative George Miller (D–CA), chair of the House education panel, and the cream of the education policy community—suggests that the report already has the ear of the Administration and leaders of Congress.
The commissioners themselves—18 prominent scientists, politicians, academics, and labor and business leaders (including Bruce Alberts, editor-in-chief of Science)—represent diverse constituencies. “We may be strange bedfellows,” says Ellen Futter, commissioner and president of the American Museum of Natural History in New York City, “but we're united behind the same purpose.”

One commissioner, Katherine Ward, says she relished her role as “a voice from the trenches.” An advanced placement biology and biotechnology teacher in San Mateo, California, Ward thinks that the panel's biggest contribution may be its recognition that producing “a generation of STEM-capable students” (the acronym for science, technology, engineering, and mathematics) can't be achieved by fiat.

The current federal law requiring all students in grades 3 through 8 to be tested each year in reading and math, known as No Child Left Behind, isn't mentioned by name in the report, but several commissioners remarked that improving STEM education will require much more than simply mandating that students make annual progress on such tests. “When there's a problem with our schools, people usually turn to teachers and ask, ‘Well, what are you going to do about it?’ But many times, the answer lies outside the control of the individual teacher,” says Ward, a 13-year veteran teacher. “Of course, we want access to the latest brain research on how children learn, on what's developmentally appropriate, on what strategies are most effective. But the way schools operate can make it hard to obtain that sort of information, much less to apply it in the classroom.
So even though the target is better math and science education, you probably can't achieve it without looking at the entire system.”

One approach that the report advocates is to scale up what Duncan calls local “islands of excellence.” One such program, Urban Advantage, utilizes the resources of museums throughout New York City to teach middle school science. Not only does the program “broaden the definition of the schoolhouse,” says Futter, who helped to create it 5 years ago, but it also satisfies a requirement that all eighth-graders in the New York public schools conduct a long-term scientific investigation. Denver and Miami are hoping to launch similar programs this fall, says Futter.

Having a good idea isn't always enough, however. Xavier University of Louisiana, a historically black college in New Orleans with fewer than 3000 undergraduates, decided in the 1970s to beef up its STEM curriculum. Now 62% of its students are STEM majors, says its longtime president, commission member Norman Francis, and it sends more African-American students to medical and dental school than any other U.S. university, regardless of size. But that success is more admired than emulated, admits Francis, who also served on the panel behind the 1983 report A Nation at Risk: The Imperative for Educational Reform, which famously identified a “rising tide of mediocrity” overtaking U.S. schools. “It's not rocket science, but it takes hard work,” Francis says about Xavier's efforts. “You can't legislate those changes, but you can support them wherever they are taking place. That's what we hope this commission will do.”

• * The Opportunity Equation: Transforming Mathematics and Science Education for Citizenship and the Global Economy (opportunityequation.org)

4. Arms Control

# Verification Experts Puzzled Over North Korea's Nuclear Test

1.
Daniel Clery

VIENNA—When North Korea declared on 25 May that it had carried out a second underground nuclear test—a blast that clearly showed up on seismometers across the globe—it seemed to confirm what most observers feared about the country's nuclear ambitions. But at a scientific conference it convened here last week, the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) revealed that its global network of radionuclide detectors, which sniff out faint wind-borne traces of radioactive elements such as xenon, had not picked up anything it could pin on the Korean test. Press reports say that South Korean sensors have also detected nothing, and neither has a U.S. Air Force plane over the East Sea, causing some to speculate that the test was faked with nonnuclear explosives.

Yet researchers at the conference weren't buying the conspiracy theory. Seismologist and verification expert Paul Richards of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, acknowledges that it is possible to make a chemical explosion look nuclear up to a certain size, but “it's very difficult and highly implausible. … Personally, I have no doubt [that it was nuclear].”

The mystery of the missing radionuclides comes at a time when there's renewed attention on the test ban treaty, thanks to President Barack Obama's campaign pledge to see the United States ratify it. Although the treaty has been signed by 180 nations, it will not come into force until all countries that have nuclear weapons or reactors have adopted it. Nine such states have yet to ratify it, including North Korea and the United States.

Nevertheless, CTBTO is building a global network of 337 detector stations from Spitsbergen to Antarctica. Currently more than 70% complete, the network has sensors for seismic signals, airborne radionuclides, and acoustic signals in the oceans and atmosphere.
On 25 May, 23 of CTBTO's 40 operational primary seismic stations picked up tremors from Korea and relayed data to the organization's headquarters here. One and a half hours later, CTBTO officials sent a bulletin to treaty signatory governments detailing the time, location, depth, and likely nature of the event. “There could not be a clearer indication of the need for these verification tools,” says CTBTO chief Tibor Tóth.

In 2006, when Korea carried out its first nuclear test, a U.S. plane detected radionuclides within days, and a CTBTO station as far away as Yellowknife in Canada picked them up 12 days after the test. This time around, there has been nothing, which is concerning verification experts. “Radioxenon is most likely the only conclusive evidence of a nuclear explosion,” says Anders Ringbom of the Swedish Defence Research Agency in Stockholm. Ringbom says CTBTO's xenon detectors, which can pick up a couple of hundred atoms from a cubic meter of air, “were working fine” in the days following last month's explosion in North Korea. The relevant xenon isotopes have half-lives measured in hours and days, so “the opportunity is … perhaps closing rapidly,” he says.

Researchers are speculating that North Korea this time deliberately sought to prevent leakage of radionuclides by, for example, carrying out the test deep underground and in rock that melts easily, forming a seal around the explosion chamber. The blast, which produced a magnitude-4.5 tremor, is thought to be larger than the one in 2006 (magnitude 4.1), and that may also have played a role. “There is evidence from earlier tests that a bigger explosion leads to more melting and so better containment,” says planetary scientist Raymond Jeanloz of the University of California, Berkeley.

Verification experts point out that doubts about the test would easily be dispelled by the test ban treaty's ultimate tool: an onsite inspection.
But such intrusive investigations can take place only when the treaty is ratified and fully in force.

5. ScienceNOW.org

# From Science's Online Daily News Site

Warp-Speed Raindrops. It's a rain race out there. In the meteorological equivalent of breaking the light-speed barrier, new research shows that the smaller droplets in a rainstorm often surpass what appears to be the speed limit for rain. The findings should help scientists devise models that could lead to more accurate weather forecasts.

Stressed Cells Cause Gray Hair. If you've ever blamed your gray hair on stress, you weren't far from the truth. Genotoxic stress—the kind that can damage a cell's DNA—causes hair to whiten over time, according to a new study. The results challenge accepted ideas about how stem cells age and may eventually lead to new ways to prevent graying and treat the more serious conditions caused by genotoxic stress, such as cancer.

Lava for Life. Astronomers scanning the skies for another Earth might need to narrow their search. New research suggests that even if a world lies within the Habitable Zone, in which water is liquid, too much or too little volcanic activity can render it lifeless.

On the Road to a New Species. Catching one species in the act of becoming two is no easy feat. Yet evolutionary biologists working in the Solomon Islands may have done just that. They have found that a single genetic change turns a small, brown-bellied bird black, possibly leading it to mate with like-colored birds—and setting it on the road to becoming a new species.

Read the full postings, comments, and more on sciencenow.sciencemag.org.

6. Law of the Sea

# A Final Push to Divvy Up the Sea by All the Rules

1. Richard A. Kerr

ANNAPOLIS—Is this the year that the United States finally ratifies an international agreement to conserve and manage the wealth of the ocean?
Since coming into effect 15 years ago, the United Nations Convention on the Law of the Sea (UNCLOS) has been guiding how 156 countries settle maritime boundary disputes, watch over their natural resources, and—especially in the Arctic—extend their rights to any riches on or beneath the adjacent seafloor. But the United States doesn't have the legal right to extend its claims or a seat on the commission that reviews the plans of other countries, because it has never ratified UNCLOS.

Last week, at a symposium* here on the implications of the dramatic shrinking of summer Arctic sea ice, scientists discussing the results of four summer mapping expeditions shared the podium with policymakers saying that the time has come for the U.S. Senate to act. “I believe it is crucial for the United States to be a party to this Treaty,” said Senator Lisa Murkowski (R–AK) in a statement read at the meeting, adding that this year represents “the best opportunity for Senate accession.”

Two U.S. agencies now oversee the spending of about $7 million a year on research aimed at exploring the geographical limits of the country's maritime rights as defined under UNCLOS. That work has gone ahead because the government recognizes the convention as “customary” international law, says marine geologist Larry A. Mayer of the University of New Hampshire, Durham, who codirects the mapping project. He and colleagues are searching for the point at which the sloping pile of sediment washed off the continent peters out, which under UNCLOS's definition marks the boundary of a country's seafloor claims.

The Arctic expeditions have considerably enlarged the area where the United States will claim rights to oil, gas, and other seafloor and sub-seafloor resources. Beyond Barrow, Alaska, in the Chukchi Sea, surveying has pushed the likely boundary northward by more than 185 km from where earlier, sketchy observations would have placed it, Mayer reported. Along the way, researchers have discovered a seafloor scoured by a grounded ice sheet and crustal rock that doesn't fit with conventional ideas of how the Arctic Ocean formed.

Those advances in scientific understanding have yet to be matched in the political realm, however. “The mapping is really quite consequential for us,” Murkowski told reporters this week. “If the treaty is not ratified, we can't make a claim before the [UNCLOS] commission. That's to our detriment; there's so much at stake.”

John Norton Moore, a law professor at the University of Virginia, Charlottesville, says that opponents have held up ratification by using “a series of distortions and untruths repeated over and over again,” including the false claim that ratification would constitute “turning two-thirds of Earth over to the United Nations.” But even with the political stars aligned—support from President Barack Obama, a sympathetic chair of the Senate Foreign Relations Committee, Senator John Kerry (D–MA), and a likely super-majority of the Senate in favor of ratification—the issue has not made it onto the Senate calendar for floor debate.

Proponents of ratification see an opening, but it will not come easily. Senator James Inhofe (R–OK) and others who regard UNCLOS as a threat to U.S. sovereignty have successfully prevented the treaty from coming up for an expedited vote after only limited debate, notes Inhofe spokesperson Jared Young. The other option—a large block of floor time—has proved elusive until now.

Both Murkowski and Inhofe think that's now likely, possibly this fall. To prepare for that opportunity, Murkowski says she's working with Kerry and Senate Foreign Relations Committee ranking minority member Senator Richard Lugar (R–IN) to “see what members we need to encourage.” One key issue is whether the United States is better off not ratifying the treaty and instead choosing which sections to abide by while enjoying the treaty's benefits. Supporters say being a player beats being an outsider.

Murkowski isn't promising victory. “It will require some floor time, which is a precious commodity, especially these days,” she says. And Young says his boss will be ready: “It is more than likely it will come up in the Senate, [where] Senator Inhofe will be a leader against it.”

• * Impacts of an Ice-Diminishing Arctic on Naval & Maritime Operations, 9–11 June, Annapolis, Maryland.

7. China

# After Outcry, Government Backpedals Over Internet-Filtering Software

1. Hao Xin

Last week, the Chinese government provoked a firestorm of criticism when it announced that starting 1 July, all computers sold in China must have a new software program meant to filter pornographic images and block content that offends government sensibilities, such as discussion of the banned sect Falun Gong. But scientists whose work was the basis for text filtering are distancing themselves from the controversial software. And as Science went to press, it appeared that the government may not force computer users to run the “censorware” after all.

China's information ministry spent $6.1 million last year on a 1-year license for Internet-filtering software that dynamically blocks Web sites by detecting “harmful” images and text, for “constructing a green, healthy, harmonious network environment and protecting the healthy growth of youths,” according to the published government procurement notice. After requiring installation of the Lü Ba-Hua Ji Hu Hang (Green Dam-Youth Escort) software on all school computers and on computers sent to the countryside, the ministry had intended to impose protection on all of China's 300 million netizens—who already face government censorship of the Internet.

But even state media have turned against Green Dam-Youth Escort, questioning its value. And Chinese lawyers have called for public hearings on the information ministry's directive, which they assert may violate China's antimonopoly law.

Netters who have tested Green Dam, developed by the firm Zhengzhou Jin Hui in Henan, say that it blocks many harmless pictures while letting some pornographic ones pass. If Green Dam finds a certain number (preset by the user) of objectionable images on a Web site, it automatically adds the address to its blacklist and blocks access to the site from that computer until it is reset by the password holder. In a test conducted by Science, Green Dam added The Wall Street Journal Web site to the blacklist because it detected too many “harmful” images, including a picture of U.S. Secretary of State Hillary Clinton.

The text-filtering component, Youth Escort, “seems to use a more intelligent method,” says J. Alex Halderman, a computer security expert at the University of Michigan, Ann Arbor. It was created by the Beijing Da Zheng Linguistic Knowledge Processing SciTech Company Ltd., a software company spun off from the Institute of Acoustics (IOA) of the Chinese Academy of Sciences.
Da Zheng's Web site says the software implements a natural language-processing method developed by IOA's Huang Zengyang in the 1990s. The theory, called Hierarchical Network of Concepts (HNC), posits that different natural languages can be mapped onto a symbolic representation of concepts, enabling machine understanding and translation with an accuracy greater than 90%, says Huang, who is listed on Da Zheng's Web site as chief scientist.

But Huang is distancing himself from the company and Youth Escort. “I don't think I have been chief scientist for even a day. That's just a company advertisement,” he says. “They don't listen to me.”

Huang's former Ph.D. student, Jin Yao-hong, wrote the original demonstration software as an HNC application and invented “standpoint filtering” in the early 2000s. Jin has described the filtering as “based on the standpoint of the article, filter reactionary speech.” For example, the software can detect whether the phrase “Falun Gong” appears in a complimentary or derogatory context. Da Zheng developed this software into Youth Escort, says Jin, who adds that “how the company will use it is out of our control.” Halderman found that entering “Falun Gong” in combination with certain words in Notepad, a basic Windows text editor, triggered filtering and that the program closed abruptly.

At a press briefing last week, foreign ministry spokesperson Qin Gang said that the government's goal is to prevent information harmful to the public from spreading on the Internet. However, earlier this week, an information ministry official told the state-owned newspaper China Daily that people would not be compelled to run the Green Dam-Youth Escort software that comes with new computers. The ministry did not respond to requests from Science for an unfiltered explanation.

8.
ScienceInsider

# From the Science Policy Blog

This week, ScienceInsider reported on a plan to build a cleaner coal-powered plant, the elimination of a key scientific instrument on a European mission to Mars, and a demand that HIV-positive people be allowed to attend an international AIDS meeting in the United States.

A model electric plant is getting help from the U.S. Department of Energy (DOE). Designed to burn gas from coal and pump carbon dioxide emissions into geological reservoirs, FutureGen II could cost $2 billion or more. Last week, DOE pledged up to $1 billion for the Illinois-based project, pending private backing.

It's still early in the 2010 U.S. budget cycle but sobering all the same: House of Representatives appropriators disapproved of plans by the National Science Foundation to spend $100 million on its Major Research Instrumentation program, saying the program has enough money for now. Legislators also shrank by two-thirds the requested funding for advanced technological education at community colleges.

Researchers who accept U.S. grants should be held to rigorous standards for disclosing outside income—such as from drug companies—according to a couple of heavyweight institutions. The Association of American Medical Colleges and the Association of American Universities told the National Institutes of Health last week that far too little income is being reported. But at the same time, they warned of “over-zealous” regulation.

Canadian academics want science minister Gary Goodyear to resign over his latest statements involving religious views. In March, Goodyear linked possible doubts about evolution to his beliefs. This time around, the minister suggested that a government funding body review its support for an upcoming conference on prospects for peace in the Middle East after pro-Israel groups complained that the list of speakers favored the Palestinian position.

Stay on top of the latest science policy news at blogs.sciencemag.org/scienceinsider.

9. Europe

# Italy's MIT Grows, And So Does Controversy Over It

1. Laura Margottini*

Everyone wants to emulate the Massachusetts Institute of Technology (MIT), but Italy is discovering that not everyone agrees on how to do it.

Six years ago, the Italian government launched the Italian Institute of Technology (IIT) with the grand goal of using scientific and engineering research to boost the country's struggling economy. It was established as a unique public-private research foundation, with government funding of about €50 million to €100 million a year for a decade—a huge investment for a country where researchers complain of chronic underfunding.

On the surface, the institute is growing up nicely. It now employs 380 scientists, based in a massive, newly renovated lab building outside Genoa, in external research centers at nine Italian universities, and in IIT-affiliated labs abroad. “We've been able to attract researchers from 38 countries,” says Roberto Cingolani, IIT's scientific director. “There is no other institution in Italy with such an international environment.”

Last month, IIT released its second 3-year strategic plan, outlining goals of awarding Ph.D.s independent of Italy's universities and expanding from its research themes of neuroscience, robotics, nanotechnology, and drug development into areas such as energy technologies and smart materials. The plan notes that almost 400 peer-reviewed articles have been credited to IIT and boasts: “We can conclude to date that IIT has filled all its promises.”

Not so, say the institute's critics. IIT was expected to partner with Italian industry, but not a single Italian company has funded research with it so far, Cingolani confirmed to Science. And although Cingolani points to a string of positive evaluations by IIT's own scientific committee, the Italian government has declined to release a recent independent assessment of IIT that, according to its authors, is highly critical. “IIT is a fascinating idea and one of the biggest investments the Italian government ever made in science,” says theoretical physicist Mario Rasetti of the Polytechnic University of Turin, one of the authors. “However, the way it is directed is leading it into a big fog.”

The Italian media have begun to ask questions. Last week, the major newspaper L'Espresso reported that only €108 million of the €518 million allocated to IIT since 2004 has been spent so far. Cingolani, however, explains that IIT spent relatively little during its initial design years, building its endowment for when it would have a full roster of scientists to fund. And he says that some of the money noted by L'Espresso wasn't originally budgeted to IIT, as it came from another foundation the government had just closed.

IIT has been controversial from its early days. The idea originated within Italy's treasury department in 2003, back when Silvio Berlusconi was the country's prime minister, as he is once again. Giulio Tremonti, the current Italian minister of economy and finance who held the same role at that time, envisioned something like MIT that would attract industrial investments and world-class scientists. Two dozen international scientists and industrial managers, including four Nobel Prize winners, were asked to help design IIT.

But several of those scientists told Science that IIT officials ignored their calls for international competitions to pick the facility's research themes and scientists. “Our proposals were never discussed nor acted on,” recalls Hans Wigzell, former president of the Karolinska Institute in Sweden. “We felt like hostages there. We weren't listened to at all.” Francesco Salamini, director of the Plant Breeding department at the Max Planck Institute for Plant Breeding Research in Germany, says, “We felt we were just icons to be displayed but not asked for a real contribution.”

The IIT assessment authored by Rasetti and Elio Raviola, a neurologist at Harvard Medical School in Boston, was commissioned last year by former Minister of Treasury Tommaso Padoa Schioppa—he was replaced when Berlusconi returned to power. The report, which involved extensive site visits, evaluated the overall scientific activity of IIT. Cingolani says he considers it a positive evaluation, but Rasetti says he and Raviola judged IIT as underperforming.

Three of IIT's main areas of research—neuroscience, robotics, and nanotechnology—are intended to merge results and focus on the creation of intelligent machines, in particular humanoid robots and high-tech bioelectronic interfaces such as those between the brain and prostheses or artificial sensors. IIT researchers, for example, have helped create a humanoid robot called iCub produced as part of a European Union-wide collaboration. Yet Rasetti says he and Raviola concluded that there is little coordination among the three research areas and that much of IIT's research is outside the areas laid out in the institute's plans.

IIT's management structure has also raised eyebrows. IIT's president, Vittorio Grilli, is the director general of the Italian treasury. “Grilli is in charge of allocating money to the same institution he chairs,” says Rasetti. “This cannot be accepted.”

IIT has also been criticized for padding its scientific roster with big names who primarily work elsewhere, much as some have accused China of doing (Science, 22 September 2006, p. 1721). “This situation penalizes young researchers [at IIT] who are left alone, without any leadership. I think one person shouldn't lead two research groups at the same time,” says Rasetti. “We consider this as a major problem. This situation won't help IIT to reach top-quality research.”

Cingolani, however, contends IIT's “brilliant scientists” have already achieved that goal. Whether IIT will ever compare to MIT is an open question, but there appears to be little doubt that Italian scientists will continue to argue about the contentious and costly venture.

* Laura Margottini is a freelance writer based in London.

10. American Astronomical Society 214th Meeting, 7–11 June 2009, Pasadena, California

# Dark-Matter Model Multiplies Mass of Galactic Black Holes

1. Yudhijit Bhattacharjee

Distant quasars harbor at their centers black holes as massive as 10 billion suns. The black holes in the middle of nearby galaxies are featherweights by comparison. Why the difference? According to new research presented at the meeting, astronomers may have been systematically underestimating nearby black hole masses by a factor of two to three.

Researchers figure out the mass of a galaxy by measuring the velocity of stars orbiting within it and calculating the amount of gravity required to keep them moving. The galaxy's light provides a mass measure for all the stars in it. Subtracting that from the galactic mass gives the mass of the central black hole.
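In outline, the bookkeeping the researchers describe is a simple subtraction. Here is a minimal sketch in Python; the function names, units, and placeholder values are illustrative assumptions (real dynamical modeling fits full orbit libraries), not the researchers' actual code.

```python
G = 4.30091e-6  # Newton's constant in kpc * (km/s)^2 per solar mass

def dynamical_mass(v_kms, r_kpc):
    """Total mass needed to hold stars at radius r (kpc) on circular
    orbits of speed v (km/s): M = v^2 * r / G."""
    return v_kms ** 2 * r_kpc / G

def black_hole_mass(total_mass, stellar_luminosity, mass_to_light):
    """Stars account for (M/L) * L of the total mass; the leftover
    gravity is attributed to the central black hole."""
    return total_mass - mass_to_light * stellar_luminosity
```

The sketch makes the fragility plain: any error in the assumed mass-to-light ratio shifts mass between the stars and the black hole.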

Karl Gebhardt of the University of Texas (UT), Austin, and Jens Thomas of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, applied new computer models to work out the mass of the black hole in the middle of M87, a giant galaxy relatively near our Milky Way. Unlike previous estimates, their work factored in the dark halo—the invisible sphere of dark matter surrounding the galaxy. Gebhardt says he and other researchers had always assumed the halo wasn't important for the calculation.

It was. Running the model on a powerful supercomputer called Lonestar at UT Austin, the team found the black hole's mass to be 6.4 billion times the mass of our sun, between two and three times previous estimates. The magnitude of the effect “caught us off guard,” says Gebhardt.

Earlier models underestimated the effects of dark matter on the outskirts of a halo, says Avi Loeb of Harvard University. As a result, they assumed the wrong “mass-to-light ratio”—a crucial factor for calculating a galaxy's mass. Including the dark matter reduces the estimated mass of the galaxy's stars and confers more mass on the black hole. “With the new estimate for the black hole mass in M87, M87 is now closer to being a remnant of a luminous quasar,” Loeb says.

“Knowing just how big black holes within galaxies are” will have a fundamental impact on “efforts to understand how galaxies grow and change,” says Tod Lauer, an astronomer at the National Optical Astronomy Observatory in Tucson, Arizona.

11. American Astronomical Society 214th Meeting, 7–11 June 2009, Pasadena, California

# Star Studies Yield Better Yardsticks

1. Yudhijit Bhattacharjee

Measuring cosmic distances is a tricky business. At the meeting, researchers presented findings that could make the job easier and help refine estimates of the rate of expansion of the universe.

Astronomers determine the distance to galaxies by looking at stellar objects of known brightness, often called standard candles. Comparing their actual brightness with their apparent brightness from Earth reveals how far they are.
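That comparison is just the inverse-square law, conventionally packaged as the "distance modulus." A minimal sketch, with a hypothetical function name and example magnitudes chosen for illustration:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance modulus m - M = 5 * log10(d / 10 pc), solved for d.
    Magnitudes are logarithmic and inverted: brighter objects have
    smaller magnitudes."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A candle of absolute magnitude -5 that appears at magnitude 10
# from Earth must lie 10,000 parsecs away.
```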

Jonathan Bird, a graduate student at Ohio State University, Columbus, reported results that could triple the distances measurable using standard candles known as Cepheid stars. Cepheids, which brighten and dim over a time period proportional to their intrinsic luminosity, are trusted mile-markers of distances up to 100 million light-years. But at periods longer than 80 to 100 days, the correlation between brightness and period breaks down, making their actual luminosity difficult to figure out. That's too bad for astronomers because Ultra Long Period (ULP) Cepheids are bright enough to be observed at distances of up to 300 million light-years.

Combing the literature, Bird and colleagues found 18 ULP Cepheids in nearby galaxies. Because the galaxies' distances are known, the researchers could calculate the intrinsic luminosities of the Cepheids. They all turned out to be similar, regardless of period. “Cepheids that fall off the correlation between period and brightness seem to fall into the same place,” says Bird. Now researchers can use distant Cepheids as standard candles, he says.

James Braatz, a researcher at the National Radio Astronomy Observatory, unveiled another signpost: a galaxy called UGC 3789 that he and colleagues determined to be about 160 million light-years away. What makes the galaxy special is the presence of water molecules that act as a maser, amplifying radio waves emanating from the gas clouds rotating around the galactic center.

The maser effect allowed Braatz and colleagues to map the galactic disk and infer the rotational velocity of the gas clouds from the Doppler shift in their spectra. In follow-up observations, the researchers derived the acceleration of the clouds by watching their velocities change over time. Combining their data, they were able to get the diameter of the galactic disk, which, compared with the disk's apparent size from Earth, provided the distance to the galaxy.
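The geometry reduces to two textbook relations: circular-orbit kinematics turn the measured velocity and acceleration into a physical radius, and the small-angle approximation turns that radius plus the disk's apparent angular size into a distance. A hedged sketch with made-up numbers and unit choices (the published analysis fits many maser spots simultaneously):

```python
def orbit_radius(v, a):
    """Centripetal acceleration on a circular orbit: a = v^2 / r,
    so r = v^2 / a."""
    return v ** 2 / a

def distance(physical_radius, angular_radius_rad):
    """Small-angle approximation: theta = r / d, so d = r / theta."""
    return physical_radius / angular_radius_rad
```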

The findings could lead to better estimates of the Hubble constant, which gauges how quickly the universe is flying apart, says Adam Riess, an astrophysicist at Johns Hopkins University in Baltimore, Maryland. “It would be fantastic to find more ULP Cepheids,” he says, noting that “they are pretty rare.”

12. Genetics

# The Promise of a Cure: 20 Years and Counting

1. Jennifer Couzin-Frankel

The discovery of the cystic fibrosis gene brought big hopes for gene-based medicine; although a lot has been achieved over 2 decades, the payoff remains just around the corner.

The gene hunt began quietly, with few theatrics and much uncertainty.

For Mitch Drumm, the starting gate lifted in the fall of 1985. He and geneticist Francis Collins met on opposite sides of a volleyball net, during a faculty-student mixer at the University of Michigan, Ann Arbor. Drumm, shorter than the lanky Collins, was outmatched in volleyball. But Collins quickly recruited Drumm to join the lab he was setting up, as its first graduate student. There, Drumm began experimenting with a gene-hunting technique Collins had developed. As a test case, they chose cystic fibrosis (CF), an inherited disease in which sticky mucus accumulates in the lungs and elsewhere, eventually killing the patient. At the time, life expectancy hovered in the early 20s.

Coincidentally, CF had been on Drumm's mind. Just months before, the infant son of his family's next-door neighbors, close friends in New Philadelphia, Ohio, had been diagnosed with the disease. Drumm still recalls the phone call from his mother relaying the devastating news. Like many others studying CF, he became immersed in the field by a personal connection, which carried him through ups and downs in the decades ahead. A big triumph came nearly 4 years after signing up with Collins, in the spring of 1989. In collaboration with a large research group in Toronto, Canada, that had started an aggressive chase for the CF gene years earlier, the team cloned the CF gene—called the cystic fibrosis transmembrane conductance regulator (CFTR)—and nailed a crucial, disease-causing mutation (Science, 8 September 1989, pp. 1059, 1066, 1073).

Everyone in the CF community recalls the electric moment when they heard the news. “I remember seeing it roll off the fax machine, gathering people in the lab, and thinking, ‘What do we need to know now?’” says Michael Welsh, a pulmonary physician at the University of Iowa, Iowa City. Most believed that the disease had grown vastly less complex overnight and would soon be eliminated, probably by gene therapy.

On the 20th anniversary of the identification of the CF gene, as new gene discoveries pile up weekly and hype over the power of genes to transform medicine flows fast, CF offers an object lesson in how difficult it is, and how long it takes, to convert genetic knowledge into treatments. Every CF expert agrees that the gene discovery transformed their understanding of the disease's pathology. But even after so much hard work, not a single therapy based on the CF gene has reached the market. Some promising treatments, especially gene therapy, have proven bitterly disappointing.

“We were naïve,” says Johanna Rommens, who at the time was a postdoc in Lap-Chee Tsui's lab at the Hospital for Sick Children in Toronto, the counterpart to Collins's group in Michigan. In her 20s and relatively new to science back then, Rommens couldn't imagine a problem that defied resolution. “I thought I could do anything,” she says. “I sometimes feel discouraged that this was so hard.” Keen to experiment with other genetic diseases, Rommens subsequently left the CF field.

Although gene therapy hasn't paid off, prospects have improved for those with CF: Their median life expectancy has stretched almost 10 more years and now exceeds 37. This is thanks not to genetic knowledge, however, but to more aggressive and earlier treatment to keep the lungs clear.

## Soaring hopes

In October 1989, a month after the CF gene was published in Science, gene therapist James Wilson strode to the speaker's podium at a Florida cystic fibrosis conference to discuss prospects for gene therapy. Thousands of people—physicians, scientists, families—packed the meeting. “I get shivers talking about it right now,” says Wilson, who then worked down the hall from Collins at Michigan and is now at the University of Pennsylvania. “The excitement was palpable. I have never felt energy like that ever before.”

There was broad consensus that the time for CF gene therapy had arrived. Two advances buoyed hopes that this new technique, still in its infancy, would eliminate CF. First, scientists had managed to “cure” the disease in test tubes. They introduced a normal version of CFTR into cells from CF patients, compensating for a defective gene that produced no protein, or none the cell could use. In addition, another researcher, W. French Anderson, then at the National Institutes of Health, began the first-ever clinical trial of gene therapy to treat an immune deficiency syndrome, demonstrating that gene transfer in people was feasible. By then, in the fall of 1990, says Wilson, “expectations for the success of gene therapy for CF were as high as I've ever seen for any disease, under any circumstances, in the 20 years I've been involved in this.”

Looking back, many CF experts consider an excessive focus on gene therapy in the early years to have been a big mistake. Like an investor who gambles much of his or her fortune on a single stock, “people kind of stopped doing the other things they were doing” and turned instead to strategies for getting the gene into lung cells, says Raymond Frizzell, a physiologist at the University of Pittsburgh in Pennsylvania.

Early successes in basic CF research also convinced scientists that new treatments were right around the corner. In addition to correcting the gene defect in cells, having CFTR in hand revealed that the healthy CFTR protein was an ion channel that helped transport chloride and control the movement of water across cell membranes. Researchers determined that several mutations led to a protein-folding defect—one of the first protein-folding diseases identified. Then in 1992, Welsh and his colleagues published a paper in Nature explaining that the protein-folding problem could be corrected by chilling the cells. Although this tactic wasn't usable in patients, it allowed “people to say, ‘It's misfolded but you can overcome that,’” Welsh says.

Even with these heartening advances in the early 1990s, there were hints that choppy waters lay ahead. The CF protein was a bear to work with because it didn't respond well to classic analytical tools like Western blots and antibody assays, recalls Margarida Amaral, now on sabbatical at the European Molecular Biology Laboratory in Heidelberg, Germany, who worked on the protein at the University of Lisbon in Portugal. “Nothing seemed to work.”

In Wilson's lab, meanwhile, postdoc John Engelhardt, now at the University of Iowa, was running into difficulties getting gene therapy to work. The gene's expression varied wildly depending on which parts of the lung researchers examined. One great appeal of gene therapy for CF was that a vector carrying a working CF gene could easily be introduced into the lung with aerosols. But Engelhardt found that only about 1% of cells lining the lung's airway—the cells that come into contact with aerosols—boasted high levels of CFTR protein.

Each advance provoked more questions. When Richard Boucher, an adult pulmonologist at the University of North Carolina, Chapel Hill, won a three-way race to create the first mouse model of CF in 1992, he and others were dismayed to find that the rodents didn't mimic human disease. They shared the gut afflictions of CF patients, who must take pancreatic enzymes for life to break down thick secretions. But the lungs of CF mice, unlike those of their human counterparts, were healthy.

Like Drumm, Boucher traces his passion for CF to a personal experience: His daughter suffered several bouts of pneumonia as a baby and was suspected of having CF. Panicked, he read up on the disease; this drew him to a career teasing apart its mysteries. The CF mouse was a big one: Why were its lungs clear? Boucher and others determined that the animals had a second chloride channel that was unaffected by CF. This led to a new understanding of how the airway surfaces stayed healthy: As long as chloride could pass through, the lungs fared well. But in other ways, the mouse proved of virtually no value. It would be 15 years before other researchers found a better animal model.

Meanwhile, gene therapy plowed ahead. In 1993, a 23-year-old became the first CF patient to receive a dose of healthy CFTR by gene transfer. Three small clinical trials began: one led by Wilson, one by Welsh, and one by Ronald Crystal at Weill Cornell Medical College in New York City. “Boy, there were all kinds of issues,” Wilson recalls. Among them: potent fears that the virus carrying healthy CFTR into a patient's nasal passages or lungs would recombine with another virus, be shed by that patient, and “create an environmental catastrophe.” Volunteers in the trials were put in strict isolation.

The bigger issue, as it turned out, was that gene therapy simply didn't work. Few lung cells took up the gene. There were also concerns about inflammation, as the lung rebelled against a viral intruder. “You had to confront the reality of eons of evolution” that had built barriers against toxins and infections, says Boucher, who also worked in CF gene therapy. Researchers in the United States spent several more years trying to get around this, tinkering with gene therapy in baboons, rhesus macaques, and other animals, before largely giving up.

## Shifting gears

Although the trials failed to help CF patients, they mattered to clinical research: For the first time, viral vectors were injected directly into a patient (as opposed to affected cells being removed, modified, and reinfused), and this became the new model for a nascent specialty. The CF trials also underscored the problem of immune reactions to treatment, which hadn't previously been appreciated, says Wilson.

In a funny way, “science has benefited more from the CF gene than CF has benefited from the science,” says John Riordan, a biochemist and, with Tsui and Collins, one of the co-discoverers of the CF gene when he worked at the Hospital for Sick Children. Now at the University of North Carolina, Chapel Hill, Riordan never thought he'd still be working on CF 20 years later. But he points out that CFTR, which belongs to a large family of membrane proteins, is unusual, using several different mechanisms to carry out its functions. As the years passed, biologists studying CFTR learned much about how chloride is transported across cells, and that the protein may also influence inflammation, cell signaling, and other processes. They found that cells build complexes of CFTR and other proteins to keep the system humming.

But what about a cure for this genetic disease, for which there'd been such high hopes? By 1998 or so, researchers knew far more about CF than they had 10 years before. They knew, for example, that hundreds of different mutations in CFTR could cause the disease and that not all disabled the protein in the same way—suggesting that different treatments might be needed for different patients. They knew, too, that CFTR couldn't explain everything. Some severely affected 12-year-olds needed lung transplants, and some 28-year-olds were running marathons—even when the quirk in their CFTR gene was identical. This led researchers to consider that other genes also play a role in CF, as do environmental factors.

None of this was quickly leading to new treatments. “1997, 1998 was really the point where we said, ‘Academics are great, but if we really want to discover drugs, we've got to become more businesslike,’” says Robert Beall, president and CEO of the Cystic Fibrosis Foundation. The CF Foundation had been instrumental in funding the gene hunt and subsequent CF research, raising and investing tens of millions of dollars. In the late 1990s, Beall began shopping around plans to develop small-molecule, traditional drugs—back to basics after gene therapy had failed.

“A lot of people thought that Bob Beall was going far out on a limb to put a lot of money into a strategy that was clearly risky,” says Collins. Multiple drugs might be needed to tackle different CFTR defects. Companies apparently were wary, too. Beall telephoned seven; two called back. One was Aurora Biosciences in San Diego, California, which was bought by Vertex Pharmaceuticals in 2001. It agreed, with generous support from the CF Foundation, to see what it could do.

## Progress at last

More than $75 million and another 10 years later, two Vertex drugs are taking the CF world by storm. One, VX-809, is designed for the most common CF mutation and helps CFTR get to the surface of the cell. Only safety data are available on VX-809 so far. The other Vertex drug, VX-770, aims to boost the function of CFTR protein that's already made its way to the cell surface—which would help in at least one of the CF mutations, accounting for a few percent of cases of the disease. Last October, Vertex reported that in a phase II trial, lung function of volunteers improved by 12% in 4 weeks of treatment. “That's more than any drug ever improved the disease” in any span of time, says Beall. The excitement around Vertex is so great that at a recent CF fundraising walk, organizers gave Vertex employees the bib numbers 770 and 809, says Paul Negulescu, a vice president of research at the company's San Diego office. And everyone knew what those numbers meant.

The CF field has enjoyed other recent breakthroughs. In September 2008, Welsh and his colleagues described a CF pig model in Science—the first animal model that closely resembles human CF. More recently at Iowa, Engelhardt, who worked in Wilson's lab in the old days and also collaborated on the pig, has developed a CF ferret, the culmination of 10 years' work. (The group spent more than 2 years just trying to understand ferret lung biology.) “Not having good animal models has really slowed the field down,” says Engelhardt. Therapies can have harmful side effects, so “when you think about treating a kid before they have overt disease, you've got to be pretty sure you've got a great treatment.” Testing in animals offers some reassurance.

Gene therapy, too, is experiencing a resurgence. In the United Kingdom, a team of 80 investigators is launching a 100-person trial using fat particles—unlike the viral vectors in earlier U.S. studies—to carry CFTR to cells. The U.K.'s Cystic Fibrosis Trust has dedicated considerable effort, and $50 million, to gene therapy. “Someone needs to find out” if this works, says Eric Alton, a gene therapist at Imperial College London who's heading up the trial, which he hopes will reveal how distant the goal is. “We're either sitting on the therapy, or we're a million miles away from it.” Despite earlier setbacks, Alton still feels that gene therapy offers more hope than drugs because, in theory at least, it's more comprehensive. Researchers have found at least 20 functions for the CFTR protein. A drug can correct only one or two at once—whereas gene transfer, if it works, can do it all. Results from Alton's trial, which is just beginning, will come in 2012.

There have been other developments: Prenatal testing is increasingly offered to couples contemplating pregnancy, potentially reducing the number of babies born with CF, although figures are difficult to come by. Forty-seven states and many countries now test newborns for CF, enabling treatment to start right away rather than months or years later, when a child fails to thrive.

## Humbling science

Many CF experts say that, after 20 long, frustrating years, it's possible now, finally, to look patients in the eye and assure them that in a few years, treatment will be vastly improved. And patients are optimistic, too. “I can't imagine where we're going to be in another 25 years if it's not cured,” says Ryan Ress, a 24-year-old with CF. Ress was Drumm's infant next-door neighbor who inspired the geneticist, now at Case Western Reserve University in Cleveland, Ohio, to stick with CF, and the two remain in touch. Ress majored in biochemistry in college and spent a summer working in a CF lab next door to Drumm's. He's now studying to be a neonatal nurse practitioner. Ress is convinced that CF will be conquered, based on his reading of the disease literature and the belief that “there will be a reward” for CF researchers for their backbreaking years of work.

But with lessons learned the hard way, caution abounds, too: “We have miles to go before we sleep,” says Paul Quinton, a physiologist at the University of California, San Diego. Quinton is a rare bird. At 20, in college and thinking about his own mortality, he says, he began combing through textbooks in his campus library, hunting for an explanation for the abdominal troubles, chronic cough, and lung problems that had plagued him for years. In books he found an answer: CF. Soon after, Quinton abandoned his dream to become a poet and turned to understanding his disease. In 1983, he determined that chloride transport was the fundamental defect in CF, one of the biggest breakthroughs in the field.

Quinton learned via genetic testing that he harbors one severe mutation and one that's milder, a combination that may explain why he's survived as long as he has. Now 64, he admits that he was as optimistic as the next person when the CF gene was found, even declaring in an editorial in Nature that the chance to cure CF had become reality. These days, though, he sees questions everywhere. How, exactly, does normal CFTR function? How does the absence of CFTR lead to the thick mucus of CF? Will even today's most promising drugs work in more than a very narrow slice of patients?

The case of CF, agrees Amaral, is “a lesson in being humble in science.”

What does this mean for the flood of genes identified in the years since—both for single-gene diseases and more complex ailments? One shouldn't generalize from the CF story, says the irrepressibly optimistic Collins, former director of the National Human Genome Research Institute, because every disease is different. He knows of at least one—progeria, which causes accelerated aging—in which a gene he helped identify led to a drug within 5 years that's now being tested in nearly every child with the disease.

Collins's early competitor and later collaborator in the CF gene hunt, Tsui, treads more carefully. “Because of the excitement, some scientists, perhaps even disease funding agencies, … wanted to give people hope, or give themselves hope,” says Tsui, who in 2002 left Toronto to become vice-chancellor and president of the University of Hong Kong. “They were a little bit optimistic at predicting when a cure would be there. … [It] taught a lesson to other gene researchers.” Namely: don't spin prophecies, don't assume that the gene is the end of the story. Rather, it's just the end of the beginning, with a long road still ahead.

13. China

# Radio Astronomers Go for High Gain With Mammoth Telescope

1. Mara Hvistendahl*

Construction is about to commence on the world's biggest single-dish radio telescope, which will take aim at exotic beasts such as pulsars and dark galaxies.

DAWODANG, CHINA—Descending into the limestone valley where China has chosen to build its paramount telescope is a treacherous hike. So steep and vast is the depression that the few dozen villagers who live at the bottom rarely leave.

Scale is precisely what China is going for with the 500-meter Aperture Spherical Radio Telescope (FAST), a massive instrument that the government hopes will thrust China to the forefront of radio astronomy. This month, engineers from the Chinese Academy of Sciences' National Astronomical Observatories in Beijing will drill into this remote corner of Guizhou Province for a final round of geo-engineering studies before breaking ground later this year. When FAST sees first light in 2014, it will measure more than five football fields in diameter, making it the largest single-dish radio telescope in the world.

FAST is modeled on Arecibo, the 305-meter-diameter radio telescope cradled in a limestone karst valley in Puerto Rico—a sight so arresting that the dish is a popular setting for science-fiction movies. But with a collecting area twice that of Arecibo, FAST will be even more striking. “It's going to be an extremely impressive project,” says Donald B. Campbell, an astronomer at Cornell University and associate director of the National Astronomy and Ionosphere Center, which operates Arecibo. “We're expecting them to do some good science with it.”

Since Arecibo was completed in 1963, radio astronomy has charted some important advances. Key among these is the development of radio interferometry, in which antennas linked in a network achieve high resolution by covering a wide geographic area. Today's radio telescopes include both interferometric arrays and single dishes, which operate at a lower resolution but simplify data collection. “You can build a large number of small dishes, or you can build a small number of large dishes,” says Nan Rendong, FAST's chief scientist and engineer.

China's decision to opt for a single dish was partly strategic. “I looked around and thought, ‘We cannot compete in phased arrays,’” Nan says. Even so, astronomers say single-dish telescopes are valuable for studying everything from magnetic fields to pulsars. FAST's jump in size will mean a leap for astronomy in those areas.

FAST will boast twice the sensitivity of Arecibo, allowing astronomers to cut through cosmic dust to detect weaker signals from a greater distance. And although Arecibo can eavesdrop on a swath of sky ranging 20 degrees from the zenith, FAST will tune in signals from an arc twice as wide, a difference that will expand access to parts of the universe out of Arecibo's reach. “They will be able to sample a large portion of the Milky Way,” says James M. Cordes, a radio astronomer at Cornell who observes at Arecibo. That's important, he says, for surveying pulsars and using the rapidly spinning neutron stars to shed light on gravitational waves and general relativity in our galaxy.

Plans for China's behemoth dish date to the early 1990s, when astronomy in the country was opening up to the outside world. In 1993, astronomers from China joined colleagues from around the globe in Kyoto for an International Union of Radio Science meeting, at which they discussed plans for a Large Telescope, the precursor to today's international Square Kilometre Array (SKA). The next year, Nan and Peng Bo, both of National Astronomical Observatories, began surveying sites in western China.

The astronomers saw Arecibo as a compelling model, figuring that they could reproduce the dish with updated technology. It helped that China abounds in karst, much of it in underdeveloped areas with low radio interference. “FAST takes advantage of our geography,” says Shen Zhiqiang, an astronomer at the Shanghai Astronomical Observatory.

In the mid-1990s, at the Beijing observatory's request, the Chinese Academy of Sciences' Institute of Remote Sensing Applications surveyed karst formations in southwest China using QuickBird satellite photos. Nan and his colleagues looked for round, uniform sinkholes that were middle-aged in geological terms, meaning they weren't yet filled with debris from erosion. Smaller candidates were marked for China's SKA proposal—the Kilometer-square Area Radio Synthesis Telescope (KARST), which would have entailed a series of 200-meter reflectors spread out over 30 smaller depressions. A larger depression was reserved for FAST, described as an SKA prototype.

China's plan disintegrated in 2006, when SKA's steering committee opted for an array of smaller antennas, a choice that ensures better imaging. But preparations for FAST continued. Last year, China's National Development and Reform Commission authorized $97 million for the project. By that point, Nan and his colleagues had settled on Dawodang, a nearly perfectly circular valley ringed by karst peaks. Spaced between the peaks will be six towers strung with cables supporting FAST's focus. Curving down from the towers will be the dish itself.

For searches that require sensitivity but not imaging, a single dish offers quicker processing speeds. “If you look back at the history of Nobel Prizes for physics, several have been awarded for work done using a single dish,” says Nan. For example, Russell Hulse and Joseph Taylor Jr. of Princeton University discovered binary pulsars using Arecibo in 1974, a landmark find that earned them the 1993 prize.

FAST will be structurally different from its U.S. cousin, however. Arecibo's focus is suspended over its reflector by a 900-ton movable platform, a design that is both unwieldy and expensive at a larger scale. FAST's reflector will instead do the tweaking through 4600 adjustable parabolic plates, and the cabin containing its focus will be lighter.

More critical to the science, FAST will have 19 feed horns, whereas Arecibo currently has just seven. That will allow FAST's surveys to be conducted with more sensitivity or, well, FASTer. That's a boon for observations, says Nan: “If you have a larger field of view, it's much easier to capture blinking transient phenomena” such as comets, meteors, and supernovae.

A chief astronomical target for the new telescope is pulsars. Arecibo has excelled at pulsar hunting, and FAST's heightened sensitivity means it should find many more.
Once the Chinese scope starts looking for pulsars, in just 230 days of operation it stands to increase the number of known pulsars nearly fourfold, to about 7000, says Richard Manchester, a pulsar expert at Australia's Commonwealth Scientific and Industrial Research Organisation who served on a panel that evaluated FAST in 2006. Astronomers hope at least one of the new crop will be a pulsar orbiting a stellar-mass black hole, which would enable them to test general relativity and other theories with great precision. “That's one of the holy grails of astronomy,” Manchester says.

Neither Arecibo nor FAST is in the right location to tune in to chatter from Sagittarius A*, the black hole believed to lie at our galaxy's center; SKA is better positioned for that task because it will be sited in the Southern Hemisphere, in Australia or South Africa. But FAST's high sensitivity will expand knowledge of distant galaxies that formed when the universe was young. Specifically, FAST will enable astronomers to detect the weak hydrogen line that is crucial to locating starless clusters of dark matter known as “dark galaxies.” “They are little beacons out there that could be used for studying the structure of the universe,” says Richard Strom, an astronomer at the Netherlands Institute for Radio Astronomy in Dwingeloo who has advised China on FAST.

On a more fantastic quest, Nan hopes the telescope will advance the search for extraterrestrial intelligence (SETI). Under Project Phoenix, the SETI Institute in Mountain View, California, has surveyed about 800 sunlike stars using Arecibo and other radio telescopes. Nan says FAST will increase that number to 5000. Uncovering signs of life in other solar systems is a “long shot,” says Manchester. “But it's something that FAST might be able to do.”

For now, however, FAST's founders are training their sights on the rugged landscape of southeastern Guizhou. “It's a huge construction job,” notes Arecibo's Campbell.
In April, engineers visited Dawodang to survey the site and inspect locations for the telescope's towers. After this month's drilling is completed, workers will widen the canals leading out of the valley to ensure that monsoon rains don't turn FAST into the world's largest swimming pool.

Among the remaining challenges is relocating the dozen families who have been eking out a subsistence living in the karst depression for as long as anyone can remember. The local government is building two-story houses for the villagers several kilometers away, but they're reluctant to move. “I haven't left [Dawodang] for 10 years,” says 74-year-old Huang Huamei.

Once the farmers are resettled, FAST's construction team can bring in bulldozers and lay the colossal dish's foundations. The project is running a few months behind schedule. But when the telescope is complete, the country will have an instrument of epic proportions. Cue the film crews.

• * Mara Hvistendahl is a writer in Shanghai.

14. Economic Recovery

# ISO … 3.5 Million U.S. Jobs

1. Constance Holden

Economists are shaking their heads at the Obama Administration's claim to be able to count the jobs that will flow from the stimulus package.

Last week, the White House announced that the massive stimulus package approved this winter has created or saved more than 150,000 jobs in its first 100 days. The next 100 days will add another 600,000 jobs, predicted Jared Bernstein, economic adviser to Vice President Joe Biden, whose office is responsible for tracking the impact of the nation's $787 billion spending spree, known officially as the American Recovery and Reinvestment Act (ARRA).

In fact, no data on job creation yet exist. The numbers come from economic models. “Once you know the spend-out and the type of spending that you're engaged in, then you can derive an estimate of how many jobs you believe you created relative to what would have occurred in the job market were you not doing that spending,” Bernstein explained. “This is an absolute tried-and-true economic methodology.” In a May report on job creation, the President's Council of Economic Advisers (CEA) does not sound quite so confident, stating that “this is an exercise that should be done despite [a] likely large margin of error.”

Welcome to Economics 101, Washington-style. When he signed it into law in Denver, Colorado, on 17 February, President Barack Obama proclaimed that ARRA “will create or save three-and-a-half-million jobs over the next 2 years, including nearly 60,000 in Colorado.” Most economists take such predictions with more than a grain of salt. In normal times, they say, modeling can do a fair job at coming up with useful prognostications. But in times like these, as Princeton University economist Harvey Rosen puts it, “all bets are off.”

Phillip Swagel, former chief economist at the Treasury Department, says that CEA has tried so hard to be specific about job creation over time and in individual states that its numbers are “as close to meaningless as you can get.” What's more, says economist Andrew Reamer of the Brookings Institution in Washington, D.C., even if the numbers turn out to be right, ARRA contains no provisions for predicting the longer term contributions to the economy. “You can have a lot of jobs [in which] people are just chasing their tails,” Reamer observes.

There's also a political dimension to counting jobs, says Donald Marron, a member of CEA under President George W. Bush. Recipients of ARRA money, he says, “have a lot of incentive to make this look effective.” That could lead, he says, to reports of jobs being “saved” that in fact were never slated for the chopping block.

Observers give the White House credit for doing its darnedest to try to make sense of the confusion. Even before Obama took office, economists worked feverishly to calculate the job impact of a recovery plan, anticipating that the package would be in the neighborhood of $770 billion in tax cuts, “fiscal relief” to states, and direct government spending.

In January, 10 days before the inauguration, Bernstein and Christina Romer, now chief of CEA, predicted that 3,675,000 jobs would have been created or retained by September 2010. That number came with some footnotes, however. It features three definitions of the word “job”—direct, indirect, and induced—offered in two flavors—created and preserved.

Relying heavily on a quarterly macroeconomic model of the U.S. economy from the Federal Reserve Board and data from economy.com, a Web site run by Moody's Investors Service, Bernstein and Romer predicted that the stimulus would increase gross domestic product by 3.7%. Each percentage point corresponds roughly to a 1% increase in employment, or 1 million jobs. Tax cuts are responsible for about 1 million jobs, with the rest being generated by infusions of cash into numerous state and federal programs. Construction projects alone claim 18%, the largest single share.

But Bernstein and Romer didn't stop there. They went on to make state-by-state and industry-by-industry employment estimates. The state analyses drew upon three sets of data: the working-age population of the state; the state's proportion of total U.S. employment; and employment in particular industries as of 2007. A state with 10% of the nation's aluminum manufacturing jobs was assumed to get 10% of the jobs created or retained in that industry as of the end of 2010. Bernstein's estimate of 150,000 new or salvaged jobs—the models can't predict which they are—was based on such assumptions.
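The state-level apportionment rule described above is simple proportional allocation. The sketch below illustrates it with made-up numbers; the 10% aluminum share is the article's own example, but the job totals and the second industry are invented for illustration and are not CEA figures.

```python
# Proportional apportionment of national job estimates to one state,
# following the rule described in the article: a state's share of an
# industry's 2007 employment determines its share of that industry's
# projected ARRA jobs. All numbers below are illustrative, not CEA data.

def apportion_jobs(national_jobs_by_industry, state_share_by_industry):
    """Allocate national job estimates to a state in proportion to the
    state's share of each industry's employment."""
    return {
        industry: jobs * state_share_by_industry.get(industry, 0.0)
        for industry, jobs in national_jobs_by_industry.items()
    }

# Hypothetical example: a state with 10% of aluminum-manufacturing
# employment is assumed to get 10% of that industry's projected jobs.
national = {"aluminum": 20_000, "construction": 600_000}
state_share = {"aluminum": 0.10, "construction": 0.03}
print(apportion_jobs(national, state_share))
# {'aluminum': 2000.0, 'construction': 18000.0}
```

The critics' point is visible in the code: the allocation is pure arithmetic on assumed shares, with no observed job counts anywhere in the calculation.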
In their January report, Bernstein and Romer forecast an unemployment rate by the middle of 2009 of about 8% with the stimulus, and 8.5% without it. When the May unemployment rate, announced last week, turned out to be 9.4%, Bernstein didn't miss a beat. Even though the economy contracted more violently than expected in late 2008, he said, the expectation of 3.5 million jobs from the Recovery Act remains intact.

That economic legerdemain highlights a point made by critics of the approach: With no baseline—there is no parallel universe with an unstimulated U.S. economy—there is no way to test the prediction. Rosen, who also served in Bush's CEA, says the Obama Administration can take credit for millions more jobs regardless of what happens to the economy. “I'm not claiming that the Bush Administration would do it any better,” adds Rosen. Rather, he says, following conventional models “makes more sense when things are proceeding normally.”

## Telling Uncle Sam

How, in fact, do you know when a job is “sustained” that might otherwise have been lost? How can you distinguish a “created” job from one that is being filled by someone who already had one? And what's the definition of a job, anyway?

In May, CEA said it anticipates that ARRA jobs will fall into three categories. The first is those financed directly from stimulus money. The second type is created “indirectly”: when a cement company, for example, gets business from a highway contractor receiving ARRA money. The third kind is “induced,” for example, a waitress hired to serve workers employed on a nearby ARRA-funded construction project, or a mall employee serving customers spending their tax rebates.

CEA hasn't figured out a way to separate direct from indirect jobs, but it predicts that the two categories, combined, will account for about two-thirds of job creation or retention. Induced jobs make up the rest. For example, ARRA's $288 billion in tax cuts will lead to about a million jobs by CEA's calculations.
But the council acknowledges that “there is no mechanism available [in ARRA] for collecting data on actual job creation” either from tax cuts, from $81 billion in payments to people hurt by the recession, or from the $144 billion in “fiscal relief” to states.

The burden of reporting how many jobs have been created or preserved will fall upon those who receive the $271 billion in direct government spending. That mechanism, which includes stimulus money flowing to academic research projects, is also expected to generate the biggest bang for the buck in jobs, according to CEA, which estimates that a contract or grant worth $92,000 translates into one job for 1 year.
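CEA's rule of thumb reduces to a single division. A minimal sketch, in which the $92,000-per-job-year divisor is the only figure taken from the article and the award amount is a hypothetical example:

```python
# CEA rule of thumb as reported: one full-time job for one year per
# $92,000 of direct spending. The divisor is from the article; the
# sample award below is invented for illustration.

COST_PER_JOB_YEAR = 92_000  # dollars per full-time-equivalent job-year

def estimated_job_years(award_dollars):
    """Convert a grant or contract amount into estimated FTE job-years."""
    return award_dollars / COST_PER_JOB_YEAR

# A hypothetical $1.5 million research grant:
print(round(estimated_job_years(1_500_000), 1))  # 16.3 job-years
```

Note that the rule says nothing about whether those job-years are new positions, retained positions, or how long each lasts, which is precisely the ambiguity the rest of the article explores.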

The rules for how to report job creation or preservation were due out this week, according to the Office of Management and Budget (OMB). Preliminary OMB guidelines say that quarterly reports should include “a narrative description of the employment impact,” including estimates of numbers of jobs created or retained, measured as full-time equivalent positions.

However it is defined, the task of counting jobs won't be easy. “They're not going to have a clue,” says a government economist who asked not to be quoted by name. In addition to the elusive nature of the jobs themselves, there's also the question of duration. Many jobs are like fireflies, flashing on one minute, off the next. “Fifteen percent of jobs last for less than a quarter [of the year],” says labor economist Julia Lane, director of a new program at the National Science Foundation called the Science of Science and Innovation Policy. That transient nature is at odds with CEA's definition of a job retained as “an existing position that would not have been continued were it not for ARRA funding.” Continued, but for how long?

Universities and state and federal agencies are scrambling to figure it all out, and their strategies are likely to vary by type of recipient. In academia, for example, there's the question of whether a graduate student counts as an employee or a student. Even if universities gear up to collect data that relates to job creation or retention, questions remain. “There's a lot of concern in the community about how we could, even with the best of guidance, ensure that we report comparable data,” says Tobin Smith of the Association of American Universities in Washington, D.C. “What kind of work can be counted toward a job retained? If one university counts one way and one another, then none of the data becomes comparable.”

Lane hopes to find a way out of the swamp, taking an approach that is intended to relieve scientists of most of the burden. It calls for coordinating administrative records that already exist within the offices of finance, human resources, and research at most universities. Creating an administrative tracking system would mean developing uniform definitions of what is a “created” and what is a “retained” job, she says. There would still be issues of privacy and confidentiality, feasibility, and cost. But, she says, such a system shouldn't be too costly because the data will already have been generated. Lane says a pilot project is planned at perhaps a half-dozen universities.

Prem Paul, vice chancellor for research and development at the University of Nebraska, Lincoln, thinks that such a system would be “extremely helpful.” Research universities are setting up Web sites and advertising positions to help administer the expected windfalls from ARRA, which includes $21.5 billion for R&D. Paul says Nebraska is applying for a total of $75 million in ARRA funds, and the university will have to hire “hundreds” of additional staff to do the reporting if all the applications are successful.

Although universities are required to report only on “direct” job creation, recipients of other types of funding, such as from the Department of Transportation, must track not only direct but also indirect jobs from billions of dollars in construction projects. The Federal Highway Administration alone will be doling out $26.7 billion to states and localities for highway construction and maintenance. The department plans to tell funding recipients to use expenditure data and apply rules of thumb supplied by the CEA such as the $92,000-per-job-year figure.

One thing that is already clear is that ARRA is having a salubrious effect on government job creation. Thousands are being hired to send out the checks and to handle reporting from ARRA recipients. Using job reports that are starting to pour in, CEA hopes that its next report in August will contain some real numbers. But theory—some would say guesswork—will still loom large.

15. Physics

# Is Quantum Mechanics Tried, True, Wildly Successful, and Wrong?

1. Tim Folger*

A skeptical physicist charges that his field has been wandering in a philosophical wilderness for 80 years. The good news: He thinks he knows the way out.

Antony Valentini has never been happy with quantum mechanics. Sure, it's the most powerful and accurate scientific theory ever devised. Yes, its bizarre predictions about the behavior of atoms and all other particles have been confirmed many times over with multi-decimal-place exactitude. True, technologies derived from quantum mechanics may account for 30% of the gross national product of the United States. So what's not to like?

Valentini, a theoretical physicist at Imperial College London (ICL) and the co-author of a forthcoming book on the early history of quantum mechanics, believes that shortly after the theory's birth some 80 years ago, a cadre of influential scientists led quantum physics down a philosophical blind alley. As a result of that wrong turn, Valentini says, the field wound up burdened with paradoxical dualities, inexplicable long-distance connections between particles, and a pragmatic “shut up and calculate” mentality that stifled attempts to probe what it all means. But there is an alternative, Valentini says: a long-abandoned “road not taken” that could get physics back on track. And unlike other proposed remedies to quantum weirdness, he adds, there's a possible experiment to test whether this one is right.

“There isn't a more insightful or knowledgeable critic in the whole field of quantum theory,” says Lee Smolin, a theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. Smolin, who researches a subfield known as quantum gravity, has long held that current quantum theory is incomplete at best.

In a book to be published later this year by Cambridge University Press, Valentini and co-author Guido Bacciagaluppi, a philosopher of physics at the University of Aberdeen in the United Kingdom, reassess a pivotal and contentious meeting at which 29 physics luminaries—including Louis de Broglie, Niels Bohr, Werner Heisenberg, Erwin Schrödinger, and Albert Einstein—butted brains over how to make sense of quantum theory.

The book, Quantum Theory at the Crossroads, includes the first English translation of the proceedings of the historic 1927 Solvay conference. The gathering was the fifth in an ongoing series of invitation-only conferences in Brussels, Belgium, launched in 1911 by the Belgian industrialist Ernest Solvay. At the meeting, blandly titled “Electrons and Photons,” attendees grappled with issues that were—and remain—among the most perplexing ever addressed by physicists. Quantum mechanics confounds commonsense notions of reality, and the physicists in Brussels disagreed sharply about the meaning of the theory they had created.

A classic experiment demonstrates the sheer strangeness of the new physics they were struggling to understand. Light—a stream of photons—shines through two parallel slits cut in a barrier and hits a strip of film beyond the slits. If the experiment is run with detectors near each slit so physicists can observe the passing light particles, the result is unsurprising: Every photon goes through either one slit or the other, just as particles should, leaving two distinct clusters of dots where the individual photons strike the film.

Remove the detectors, however, and something exceedingly strange happens: A pattern of alternating light and dark stripes appears on the film. The only explanation is that photons sometimes behave like waves. As light waves emerge from the two slits, bright lines form on the screen where wave crests overlap; dark lines, where a crest and trough cancel each other. As long as no detectors are present, the same pattern appears even if the photons hit the screen one by one. Over the decades, physicists have tried the experiment with photons, electrons, and other particles, always with the same bizarre results.
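The striped pattern follows from elementary wave superposition. The sketch below, with illustrative parameters not taken from the article, sums the complex amplitudes arriving from the two slits and shows that the intensity peaks where the path lengths agree and vanishes where they differ by half a wavelength:

```python
import math
import cmath

# Two-slit interference from wave superposition (illustrative
# parameters): sum the complex amplitudes from each slit, then square
# the magnitude to get the intensity at position x on the film.

wavelength = 500e-9    # 500-nm light
slit_sep = 50e-6       # 50-micron slit separation
screen_dist = 1.0      # 1 m from slits to film

def intensity(x):
    # Path length from each slit to the point x on the screen
    r1 = math.hypot(screen_dist, x - slit_sep / 2)
    r2 = math.hypot(screen_dist, x + slit_sep / 2)
    amp = (cmath.exp(2j * math.pi * r1 / wavelength)
           + cmath.exp(2j * math.pi * r2 / wavelength))
    return abs(amp) ** 2

# Bright fringe at the center (equal paths), dark fringe where the
# paths differ by half a wavelength:
fringe_spacing = wavelength * screen_dist / slit_sep  # ~1 cm here
print(intensity(0.0))                  # ~4: fully constructive
print(intensity(fringe_spacing / 2))   # ~0: fully destructive
```

Run one photon at a time, the dots accumulate into exactly this intensity pattern, which is the puzzle the Solvay attendees were wrestling with.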

The experiment highlights two of the conundrums that dominated discussions at the 1927 Solvay conference: How can photons, electrons, and all other bits of matter and energy behave like waves one moment, particles the next? And how does one explain that the mere act of observation seems to affect physical reality—at least on the quantum level?

## Unreality rules

Bohr and Heisenberg answered such questions with an austere vision of the theory now called the Copenhagen interpretation. With no observer present, they said, any given particle exists here, there, and everywhere in between, dispersed like a wave. Introduce an observer to measure the wave, however, and the quantum wave “collapses” into a single particle. Before the measurement, the particle could be described only by an equation that specified the probability of finding it in one location rather than another. The act of measurement itself forces a particle to assume a single, definite position. The sharp boundary between an objective world “out there” and subjective observations blurs in this version of quantum theory.

“Bohr believed that it was meaningless to try to describe the quantum world because we have no direct experience of it,” says Valentini. “Bohr and Heisenberg thought that quantum mechanics showed we had reached the limits of human understanding. … Physics no longer told us how things are—it only told us how human beings perceive and measure things.”

Some conference participants, most notably Einstein, de Broglie, and Schrödinger, rejected Bohr's arguments. Physicists today remember Einstein as Bohr's chief antagonist. But their famed disputes over the validity of quantum theory must have taken place off the record, Valentini says; the published conference proceedings don't mention them at all.

The proceedings do, however, contain 24 pages of discussion of a rival interpretation by de Broglie. Unlike Bohr, who viewed the quantum wave equation describing a particle as a mathematical abstraction, de Broglie thought such waves were real—he called them pilot waves. In de Broglie's picture, particles never exist in more than one place at the same time. All the mysterious properties of quantum theory are explained by pilot waves guiding particles along their trajectories. In the two-slit experiment, for example, each particle passes through only one slit. The pilot wave, however, goes through both slits at once and influences where the particle strikes the screen. There is no inexplicable wave collapse triggered by observation. Instead, Valentini says, “the total pilot wave, for the particle and the detectors considered as a single system, evolves so as to yield an apparent collapse.”

Bohr, Heisenberg, and their supporters at the Solvay conference were unimpressed. The details of the particle trajectories were unobservable, and Bohr insisted that physicists shouldn't traffic in hidden, unmeasurable entities. “De Broglie wasn't happy with the Copenhagen interpretation,” says Valentini, “but he gave up trying to argue about it.”

Bohr and Heisenberg's vision of quantum theory prevailed; de Broglie's languished. David Bohm, a prominent American physicist, rediscovered de Broglie's work in the early 1950s and expanded on it. But Bohm's work, like de Broglie's, failed to attract much support, because it could not be distinguished experimentally from conventional quantum mechanics.

The past decade has seen renewed interest in understanding the foundations of quantum mechanics, and physicists have devised several competing interpretations of the theory (Science, 25 June 2004, p. 1896). Valentini has been in the thick of this quantum renaissance. In the early 1990s, as a graduate student studying with the late Dennis Sciama, a cosmologist who also mentored Stephen Hawking, he learned about the work of de Broglie and Bohm and became convinced that it had the potential to resolve all the mysterious paradoxes of quantum mechanics. He has spent most of his career almost single-handedly building on their work.

His single-mindedness has cost him. Although Valentini's colleagues acknowledge the originality and importance of his research, spadework on the foundations of quantum theory has not been a fast track to tenure. For years, he has survived from grant to grant in a succession of temporary positions; his current one at ICL ends this year.

“I used to do private teaching just to get by,” Valentini says. “Things have changed in recent years, but I'm still just living year by year. It is a field where there are these wide-open, in-your-face problems with interpretation that are staggeringly fundamental, with virtually nobody in the world really dedicating the bulk of their time and attention to working on them. So how do you expect there to be much progress?”

## Beyond the quantum?

In Valentini's physics, the “laws” of quantum mechanics are not really laws at all but accidents of cosmic history. Particles in the universe today conform to the supposed rules of quantum mechanics, Valentini suggests, because they settled into a sort of quantum equilibrium immediately after the big bang, in a process roughly analogous to the way a mixture of hot and cold gases gradually reaches a uniform temperature. Immediately after the big bang, particles could have existed in states not allowed by the normal rules of quantum mechanics but permitted in pilot-wave theory.

“Quantum physics is not fundamental; it's a theory of a particular equilibrium state and nothing more,” says Valentini. “To my mind, pilot-wave theory is crying out to us that quantum physics is a special case of a much wider physics, with many new possible phenomena that are just there waiting to be explored and tested experimentally.”

The place to look, Valentini says, is in the cosmic microwave background (CMB), the remnant radiation from the big bang that fills all of space. The radiation is almost perfectly uniform, with only slight variations in temperature. Theorists think those small temperature differences resulted from quantum fluctuations that were magnified as the universe expanded. In a paper Valentini has submitted to Physical Review D, he argues that if his pilot-wave theory is correct, some of those temperature variations will not have the distribution that standard quantum theory predicts. Deviations are more likely to survive at long wavelengths, he says. CMB measurements by the WMAP probe have revealed “intriguing” anomalies in precisely that domain, Valentini says, but pursuing them will take time and effort. “I need to do a lot more work to refine my predictions,” says Valentini. “Part of the problem is that I'm the only person working on it. It is a difficult thing.”

Confirmation of Valentini's idea would be one of the biggest advances in physics in decades. The Planck spacecraft, launched in May by the European Space Agency (Science, 1 May, p. 584), will take a closer look at CMB and could conceivably find evidence supporting Valentini's predictions.

“One of the most attractive features of Antony's proposals is that they're testable,” says David Wallace, a philosopher of physics at the University of Oxford in the United Kingdom. “If tomorrow there is some experiment that Antony's theory gets right and quantum mechanics gets wrong, then end of story.”

Valentini knows he faces steep odds. “Maybe in 200 years people will look back and say the time wasn't right to reexamine the foundations of quantum mechanics,” he says. “Or it might be that they'll say, ‘My God, it opened up a whole new world.’ We can't tell. One thing is certain: We won't find out if we don't try.”

• * Tim Folger is a contributing editor at Discover.