News this Week

Science  22 May 2009:
Vol. 324, Issue 5930, pp. 996


  1. Swine Flu Outbreak

    Past Pandemics Provide Mixed Clues to H1N1's Next Moves

    1. Jon Cohen

    Ask influenza researchers where they think the swine flu outbreak is heading, and the stock reply is a flat-out refusal to speculate. But ask them whether their research into other influenza viruses and the course of past pandemics hold any clues, and they have insights aplenty. And many lessons from the past, they say, upend cherished dogma.

    Like hurricane watchers, public health officials must try to predict the velocity and course of the influenza A virus causing this outbreak, a novel H1N1, because they have to make complicated and expensive decisions about how aggressively they should pursue vaccines, antivirals, and mitigation strategies like school closings. Given the fuzziness of the data about this new virus's behavior, researchers are looking to the past for clues about the seasonality and geography of pandemic flu, the relationship between the new viruses and existing ones, and the behavior of this new H1N1's parent viruses in swine.


    Many would-be influenza seers expect that this H1N1 will soon leave the Northern Hemisphere for cooler, southern climes, infect millions there, become stronger, and then return north as temperatures drop, causing a second wave of infections in the north that are more devastating than the first. “It happens, but everybody is hedging their bets with this virus,” says epidemiologist and historian Donald Olson, research director for the International Society for Disease Surveillance. Although abundant evidence confirms the seasonality of flu, a careful study of a different human influenza virus, H3N2, published in Science last year (18 April 2008, p. 340), showed that it typically starts in east and Southeast Asia, and then indeed moves from north to south—but rarely returns to the north. Virologist Robert Webster of St. Jude Children's Research Hospital in Memphis, Tennessee, adds that the new H1N1 has already shown that it doesn't give a whit about the ending of winter in the Northern Hemisphere. “It's too late in the season to be in humans at the moment, but it is,” says Webster. “You can't lay down rules for flu viruses—they'll break them every time. It's almost as though the virus reads them and says, ‘I'll do the damn opposite.’”

    Researchers have relatively good data about four pandemics that have occurred since 1890, and they all followed different seasonal patterns (see graphic). The second wave of the first pandemic walloped London between April and June 1891. The infamous 1918 Spanish flu had its first wave in Copenhagen in July. The first wave of the 1957 Asian flu hit the United States in September. “Pandemic influenza viruses do not respect normal seasonality,” concludes epidemiologist Lone Simonsen of George Washington University in Washington, D.C., who has closely studied each historical pandemic. She questions whether the new H1N1 will even leave the Northern Hemisphere in the next few weeks. “There could be a summer wave,” she says. “I'd make a case for not putting our guards down now.”

    Epidemiologist Arnold Monto of the University of Michigan School of Public Health in Ann Arbor says when the 1957 flu surfaced in the United States, many researchers suspected the same north-south-north pattern would come into play. “People at that time were worried that they might be seeing a 1918,” says Monto, who notes that air travel was not prevalent then. “So they sent people to the Southern Hemisphere … to study the outbreak as it took off, and in most countries, it didn't.”

    If the nightmare scenario of 1918 returns, the novel H1N1 could become more destructive during a second wave. In that pandemic, Simonsen and others suspect that the virus went through antigenic drift, a process in which the human immune response leads the virus to mutate. “We would expect to see quite rapid drift of this virus once it generates a large-scale epidemic in one hemisphere,” says Neil Ferguson, a mathematical epidemiologist at Imperial College London.

    Virologist Peter Palese of Mount Sinai School of Medicine in New York City says that when gauging the threat posed by the new H1N1, scientists should keep in mind that it does not have a version of the protein called PB1-F2 that research has linked to the virulence of other pandemic strains. “That's one reason I don't believe this virus will cause a lot of problems,” says Palese. The new H1N1 could also die out entirely. “There's a 50–50 chance it will continue to circulate,” he predicts.

    One worry is that the new virus could pick up genes from other influenzas, a process called reassortment, and develop more resistance to drugs or other dangerous features. Although there is no index to gauge a strain's likelihood to reassort, virologist Richard Webby, who works with Webster at St. Jude, says the well-studied parental viruses of this H1N1 suggest that it comes from a promiscuous family. The new virus is a so-called triple reassortant—a mix of swine, human, and avian genes—that researchers first found in North American pigs in 1998. Since then, Webby says researchers have discovered three different instances of human genes reassorting with these pig viruses.

    But the good news, says Webby, is that no previously detected pig triple reassortants have caused severe disease in swine. The same is true of the other presumed parent of the novel H1N1, a Eurasian swine virus. “We know enough about the parents of this particular virus, which have been evolving in a mammalian host for extended periods, that I'm far from convinced that we'll see any exacerbation of disease,” says Webby. “I'm not a big believer that this virus will come back with a punch in a second wave.”

    Yet at the end of the day, making predictions about this new H1N1's next move is a mug's game. “There's nothing more predictable about flu than its unpredictability,” cautions epidemiologist Monto. “Who would have thunk something would have come out of North America and taken off first in Mexico and then gone to the United States and Canada?”

  2. China

    Appearances Can Deceive, Even With Standard Reagents

    1. Hao Xin*

    Culturing immortalized human cell lines for microRNA studies had been a routine procedure in Xi Jianzhong's lab at Peking University. Then last June, something went wrong. Time after time, black spots appeared in the flasks and the cells died within a week. Xi, a biomedical engineer, spent the rest of 2008 trying to figure out why his team's experiments were failing. He finally got a tip that a cell growth medium his lab was using—Dulbecco's Modified Eagle Medium (DMEM)—might be bogus. “DMEM is so basic that we never suspected it could have problems,” says Xi. Sure enough, after obtaining a fresh batch from a well-known distributor, the cell lines grew without a hitch.

    China is infamous for cheap knockoffs of brand-name consumer goods. Now many scientists are discovering to their dismay that a cottage industry of faux biochemical reagents has sprung up to take advantage of China's hefty increases in R&D funding. Scientists who until recently worked overseas are especially vulnerable because they may not know which dealers to trust, Xi says. “More and more researchers are returning to China. We don't want them to waste time and money like we did,” says Huang Yanyi, who like Xi is a returnee in Peking University's biomedical engineering department.

    After Xi learned that many colleagues had also been victimized but kept quiet about their experiences, he contacted a Web site for China's scientific community. In an online survey conducted with the biweekly magazine Science News, more than half of the nearly 500 respondents reported run-ins with fake reagents, according to results posted on the site on 29 April.

    Spitting image.

    Authentic (left) and fake (right) packages of Dulbecco's Modified Eagle Medium.


    Exposing scams may not prevent them. Currently, no Chinese agency regulates research reagents, except those used for medical tests. A database kept by Lab-on-web lists more than 500 reagent dealers in China; many are small operations claiming to be distributors of foreign products. The Web site has also published the names of unscrupulous dealers.

    Legitimate companies say there is little they can do about the bad apples. “It is a widespread concern throughout the life science and other industries in China,” says Johnson Ho, president for Greater China at Life Technologies, based in Foster City, California. Invitrogen, a division of Life Technologies, discovered that some of its products had been counterfeited. It has created a list of authorized distributors; Ho suggests customers buy only from distributors on the list.

    Earlier this year, Xi confronted the Beijing-based distributor who sold his lab the false DMEM. The dealer challenged Xi to prove his case. Xi called Invitrogen's office in Shanghai, which on 20 March dispatched representative Ju Jun to Xi's lab to examine the DMEM. At first glance, the packages looked genuine, but a search in Invitrogen's database by lot number, which anyone can do on the company's Web site, revealed that one lot number did not exist, and a second had an expiration date that was different from that stated on the fake DMEM's package.

    Counterfeits are not limited to Invitrogen products. In recent years, others have reported fake enzyme-linked immunosorbent assay kits and ovalbumin products. More than time or money may be at stake, says Huang. If Chinese researchers were to publish spurious results because of fake reagents, he says, “we could lose our credibility as scientists.”

    * With reporting by Xu Zhiguo of Science News in Beijing.

  3. Planetary Science

    Mars Rover Trapped in Sand, But What Can End a Mission?

    1. Richard A. Kerr

    Times are tough for the Spirit rover. Last week, in its 5th year of exploration, Spirit became mired in a dry martian version of quicksand. Mission engineers are studying how it might extricate itself, but the crisis raises a perennial question: When, if ever, should a mission in its declining years be put out of its misery? When does the diminishing science output no longer justify the millions of dollars it costs?

    “NASA always has a hard time cutting off any operating mission,” says planetary geologist Harry McSween of the University of Tennessee, Knoxville, who is on the Spirit science team. “If it turns out that Spirit can't move anymore, perhaps that decision will be easier.”

    No one has yet suggested ending Spirit's mission, but a permanently immobilized rover would certainly force the issue. NASA already conducts annual evaluations, which balance the cost of operating Spirit and its sister rover, Opportunity—currently $20 million per year—against the potential science payoff of continued operation. The two rovers have already run up initial costs of $400 million each, so their relatively modest annual costs have presented no obstacle to passing their annual reviews, despite the inevitable decline in major discoveries with time.

    “The new findings [from the rovers] are dwindling to a trickle,” says Michael Carr of the U.S. Geological Survey in Menlo Park, California, a nearly 40-year veteran of Mars exploration. That's partly because the rovers have thoroughly explored their immediate surroundings. Spirit is now nearly immobilized at what is likely an ancient steam vent, bogged down in deposits from the volcanic outlet—some of the very stuff that was one of its major discoveries.

    On the other side of the planet, Opportunity is 3 kilometers into a 16-kilometer trek away from Victoria crater, which it explored for 2 years. On page 1058, Opportunity science team members report that the salty sediments exposed in Victoria's walls looked just like those in the two smaller craters Opportunity has also explored. At its present pace, Opportunity will take two more years to cross flat, familiar terrain before reaching its next target, the even larger Endeavour crater.

    The discoveries also come more slowly because the rovers are getting long in the tooth. They are in surprisingly good shape for being years past their 90-day “warranties,” but Spirit is dragging one of its six wheels, which has become inoperable. That is making uphill travel difficult or impossible. Opportunity has a wheel showing early signs of the same problem. Among other signs of advanced age: The two bits of radioactive cobalt-57 vital to each rover's Mössbauer mineral analyzer are so depleted that it “takes weeks to do what we were able to do in hours,” says McSween.

    Near, yet far.

    The Mars Spirit rover (inset) has bogged down in fluffy soil until its wheels are nearly buried (left). It had been on its way to another likely volcanic vent 200 meters away (right).


    Long-running NASA missions like these rovers most often end in one of two ways. Controllers can lose them to a technical problem, perhaps through their own error. That often happens late in a mission when cost saving has shrunk the mission's operations staff, says Carr. He recalls how in the early 1980s, the skeleton crew running the two Viking landers on Mars ended up losing a lander after misorienting its communications antenna.

    Or missions can run out of energy. After the Galileo orbiter ran low on thruster fuel in 2003, controllers flew it into Jupiter, making valuable observations in close-in regions. The two Voyager spacecraft, on the other hand, are on an “interstellar” mission beyond Pluto after their 1977 launches. Their waning radiogenic power sources still allow them to return a dribble of observations from the edge of the solar system, where no spacecraft has gone before.

    Spirit and Opportunity rover team members half-expected that a predictable dearth of energy would end their mission, as martian dust accumulated on the rovers' solar panels and blocked sunlight. But wind gusts came to the rescue. Just a few weeks ago, several gusts—presumably swirling martian dust devils—cleaned Spirit's panels and returned its power to two-thirds its level on landing.

    The rovers enjoy considerable political support in Congress, and the public finds the intrepid explorers adorable. But to Mars researchers, it's the new science the rovers might return that justifies their operating cost. “There's more that can be done,” says John Mustard of Brown University, chair of NASA's Mars Exploration Program Analysis Group. “You don't turn off an operating rover. You can't predict what's around the next corner. You're moving, you're doing science, you go for it.”

    “Moving” will be key. “It's very easy to make a case if you get to a new place,” says Torrence Johnson of NASA's Jet Propulsion Laboratory in Pasadena, California. Even if Spirit extricates itself, sand traps in its path ahead might still confine it to an already-explored patch of Mars. And if Opportunity's misbehaving wheel finally breaks, that rover could wind up hobbling across a relatively boring plain far from Endeavour. Either way, says rover science principal investigator Steven Squyres of Cornell University, mission scientists and engineers would “have to look case by case with NASA headquarters” to balance the newly limited science potential against the inevitable cost. Then someone at NASA—probably the administrator—would decide whether to pull the plug.

  4. Newsmaker Interview

    NASA Asks Augustine to Point the Way

    1. Jeffrey Mervis

    This month, Norman Augustine, former CEO of Lockheed Martin, agreed to lead a 90-day review of NASA's human space flight program. The 73-year-old aeronautics engineer has a sterling reputation for providing impartial and useful advice on science policy: In 2005, he chaired a similarly rapid examination of the federal research investment that produced the National Academies' wildly influential Rising Above the Gathering Storm (RAGS) report, and in 1990, he directed a review of the U.S. space program.

    This time around, presidential science adviser John Holdren and acting NASA Administrator Christopher Scolese want Augustine to review the space “vision” laid out by President George W. Bush in January 2004. NASA has translated that vision into building a new rocket and space capsule that would replace the shuttle and service the international space station. The next step would be returning astronauts to the moon and, eventually, visiting Mars. Augustine says the panel will also be looking at “balancing” human exploration and science.

    Augustine emphasized that he didn't want to speak for the commission, whose members are expected to be named shortly. But he did share his thoughts on a number of issues that the panel is likely to tackle.

    Q: Do you expect this commission to break new ground?

    N.A.: Sometimes it's a matter of putting the pieces together. When the RAGS report recommended increased support for basic research and fixing K–12 education, we weren't entering new territory. And I suspect the same will be true for this commission. It's not likely that we'll be finding a new planet. But sometimes restating the obvious is helpful.


    Q: How much will you be constrained by NASA's tight budget?

    N.A.: We've been asked to recommend what makes sense within that [budget] profile. But we will not be shy about saying that NASA doesn't have enough resources to do the job right, if that's what we believe is the case.

    Q: You told appropriators in the House of Representatives recently that you think NASA's budget should be on the same doubling path as the National Science Foundation and the Department of Energy's science programs.

    N.A.: It's my personal view that any agency performing basic research should be on that path, and that includes NASA.

    Q: Will the panel also review NASA's space science program?

    N.A.: The focus of our effort is on the human space program. But we will also be looking at the science programs that are associated with human flight. That includes aspects that might better be done without human involvement. … For example, we might look at the idea of a precursor mission, with a robotic probe, before you get humans involved. That would tell you if it even makes sense to go where you want to go.

    Q: Would that be a new approach for NASA?

    N.A.: Yes, I don't know that they have done that in the past. And I'm not saying that we would [suggest it]. But I think we'd at least want to look at that possibility. The previous [1990] study made the point that there are certain things that humans can do that robotics cannot, and vice versa. And I think it's important to keep that in mind as we go forward.

    Q: What's your goal for the commission?

    N.A.: Hopefully, we'll be able to recommend a way forward, but in a fiscally responsible manner.


  5. From Science's Online Daily News Site

    Surviving Chernobyl. You might expect the scene of the world's worst nuclear disaster to be a barren wasteland. But trees, bushes, and vines overtake abandoned streets surrounding the Chernobyl nuclear power facility in Ukraine. Now, researchers say they've discovered changes in the proteins of soybeans grown near Chernobyl that could explain how plants survive despite chronic radiation exposure. The findings, reported in the Journal of Proteome Research, could one day help researchers engineer radiation-resistant crops.

    Designer Antibodies and AIDS. A new antiviral strategy powerfully protects monkeys from SIV, the simian cousin of HIV. The approach, reported in Nature Medicine, combines elements of vaccines and gene therapy. Experts say the development could eventually lead to a vaccinelike weapon against AIDS—a goal that has thus far proved elusive.


    Putting Photons to Work. Researchers have built a nanoscale device that vibrates when struck by incoming laser light. The contraption, reported in Nature, is sensitive to the energy of a single photon. The advance could speed the development of new optical communications systems; it could also help scientists probe some of the fundamental properties of matter with greater precision.

    Sparrows Change Their Tune. The year was 1970. Simon and Garfunkel topped the charts, floppy disks were brand-new, and California white-crowned sparrows sang fast, machine-gun trills. Just a few decades later, the sparrows sing noticeably slower songs, and a new study reveals the reason. The birds' habitat has gotten scrubbier, and their melodies have evolved to better penetrate the thickets, researchers report in The American Naturalist.

    Read the full postings, comments, and more at Science's online daily news site.

  6. Biotechnology

    Lawsuit Challenges Legal Basis For Patenting Human Genes

    1. Eliot Marshall

    Patents have been awarded on human genes for decades, but until last week, no one had directly challenged in a U.S. court the underlying idea that genes can be owned. Now, a challenge has begun.

    With donated legal help, a group of patients, doctors, and research professionals sued in New York City on 12 May to invalidate perhaps the most famous gene patents: those on BRCA1 and BRCA2, genes associated with risk for breast and ovarian cancer. The main argument in last week's legal filing in Manhattan's federal court is that BRCA1 and -2 are “products of nature” and should never have been patented. The suit names 12 defendants, including the U.S. Patent and Trademark Office (PTO), for allowing the patents, and Myriad Genetics Inc. of Salt Lake City for running a patent-protected gene testing monopoly.

    Test case.

    ACLU claims that gene patents were used to suppress free speech.


    Myriad and others who hold BRCA patents—including the U.S. government—had not filed a response with the court at press time. Myriad spokesperson Richard Marsh said the company is still evaluating the case but added that a Supreme Court ruling in 1980 (Diamond v. Chakrabarty) upheld the patenting of a genetically engineered organism. “In light of the issuance of thousands of gene patents over 30 years, … we believe our patents are valid and will be upheld by the courts.”

    The legal muscle in the complaint against BRCA patents is provided by two groups in New York City, the American Civil Liberties Union (ACLU) and the Public Patent Foundation, a smaller advocacy group associated with the Benjamin N. Cardozo School of Law. According to an ACLU spokesperson, the civil rights champion has never before taken the lead in a science-laden case like this. The complaint goes beyond the “natural products” challenge and makes a second, more incendiary charge. It claims that Myriad violated the plaintiffs' freedom of speech, which is guaranteed by the U.S. Constitution, by using its monopoly to impede rival research, restrict clinical practice, and deny people access to medical information.

    For example, the complaint says, Myriad issued a “cease and desist” letter to genetics researcher Haig Kazazian Jr. at the University of Pennsylvania—one of 20 plaintiffs in this case—ordering him in 1998 to stop BRCA screening in his clinic without Myriad's permission. Myriad issued eight other such orders, the complaint says, including one to Yale University, which subsequently stopped BRCA testing. Several patients who joined the suit also claim that they were blocked from getting independent, second medical opinions because Myriad did not allow other labs to conduct or interpret BRCA tests.

    Although these charges may stir emotions, says Q. Todd Dickinson, former head of PTO and current director of the American Intellectual Property Law Association, judges are likely to be more interested in the primary legal issue: whether PTO made a fundamental mistake in granting gene patents. That argument may have some legal traction, several patent experts said, as U.S. courts have been moving to restrict the scope of patents.


    Myriad Genetics's monopoly on BRCA gene testing is under fire.


    Myriad, which won a broad U.S. patent on BRCA1 in 1997 and has since acquired and licensed other BRCA patents, has faced similar challenges in Europe, where it has been embroiled in continuous legal skirmishes. There, too, Myriad was accused of interfering with BRCA research and trying to impede clinical work. In particular, Myriad was faulted by researchers for being slow to acknowledge the importance of European data showing that some deletion mutations in BRCA genes could increase risk, says Michael Watson, executive director of the American College of Medical Genetics, which has opposed gene patenting for a decade and is a plaintiff in the ACLU case. These mutations were not part of the company's original test procedure, Watson notes, and the firm later added them (Science, 31 March 2006, p. 1847). The European Patent Office, meanwhile, has sharply narrowed Myriad's patent claims (Science, 24 June 2005, p. 1851).

    Hans Sauer, associate general counsel for the Biotechnology Industry Organization in Washington, D.C., says that gene patents are needed to secure investments in risky new projects, particularly for work on new therapies. In his view, there's essentially “no evidence” that patents have impeded research or medicine. Nonetheless, the ACLU suit is an interesting “test case,” he says, because it raises important questions about how to pay for the development of genetic medicine.

  7. Data Sharing

    Group Calls for Rapid Release of More Genomics Data

    1. Elizabeth Pennisi

    In 1996, at a meeting in Bermuda, researchers participating in the Human Genome Project opened the floodgates of DNA data by agreeing to release sequence information daily into a public database. “It was a game-changing situation,” recalls Eric Green of the National Human Genome Research Institute in Bethesda, Maryland. Until then, sequence data generally became available only upon publication. In 2003, the genome-sequencing community reiterated this pledge at a follow-up meeting in Fort Lauderdale, Florida, and came up with guidelines on how prepublication data should be used.

    Now pressure is mounting to extend the Bermuda Principles to a broad range of publicly funded projects that go beyond sequencing. They include whole-genome association studies, microarray surveys, epigenomics scans, protein structures, large-scale screening of small molecules for biological activity, and functional genomics data, only some of which are now covered by prepublication data-release policies. “Life in genomics has gotten far more complicated than what we had to deal with in Bermuda and Fort Lauderdale,” says Green. Last week, at the International Data Release Workshop held in Toronto, about 100 researchers, ethicists, and funding agency representatives began to hammer out guidelines for such efforts.


    Co-chairs Ewan Birney (left) and Tom Hudson (right) are choosing their words carefully to convey their meeting's consensus about data-release policies.


    In 1996, researchers were grappling with DNA sequence from just a few centers and with issues such as how to make data available quickly while preserving some rights to publish first on a whole data set. Now, 13 years later, data sets are proliferating and have become more diverse. Today, the point at which data should be released is less clear. For example, next-generation sequencing churns out huge quantities of data quickly, but the sequence needs some processing and has more errors than the raw sequence generated by the Human Genome Project.

    And researchers want information about the traits and diseases of the people whose DNA has been sequenced. “The infusion of human subjects research is adding great complexity,” says Green. Because of patient-protection concerns, these data are not in open databases; how to regulate access to the information is still being worked out.

    Five years ago, high-throughput genome centers did the lion's share of the sequencing as part of so-called community resource projects. Whole genomes were deemed community resources and released prepublication in one form or another. But in the next 5 years, faster, cheaper sequencing technologies should allow individual labs to tackle large genomes. How willing or able will those labs be to release data before publication? These smaller groups “don't have the expertise to develop the tools or set up their data in a way that others can use it,” says meeting co-organizer Tom Hudson of the Ontario Institute for Cancer Research in Toronto, Canada. And some may not want to for fear of being scooped.

    At the meeting, participants debated how to ensure that researchers who release data early get credit for the work and a chance to publish their analyses first. “That tension was a very old tension,” says Ewan Birney of the European Bioinformatics Institute in Hinxton, U.K., a co-organizer of the meeting. There was general agreement that the community needs to come up with a standardized way of citing prepublication databases so credit goes to the data producers. Participants also called for granting agencies and journals to encourage prepublication release. “It's the responsibility of the users and the funding agencies to try to make that happen,” says Hudson.

    The proliferation of controlled-access databases, many of which contain human genotype and trait information, poses more difficult problems. At issue is how to permit access to the data while protecting privacy—a task complicated by the fact that some databases contain information from multiple countries that vary in their patient-protection rules. “We need to do this in a context that doesn't make progress impossible,” says David Haussler of the University of California, Santa Cruz. He advocates one centralized authorization board. Hudson agrees. Eventually, “every scientist in the world will need passwords,” he says. “The question is whether they will need to have hundreds or one.”

    In the next several months, a final report—and, it is hoped, a publication on the topic—should help spell out how to extend prepublication data release beyond the sequencing community and further the discussion on controlled-access databases. As Tim Hubbard of the Wellcome Trust Sanger Institute sums up, “What came out was the need for clarity” about the proper etiquette for releasing, using, and accessing these data.

  8. ScienceInsider

    From the Science Policy Blog

    ScienceInsider continues to provide up-to-the-minute coverage of the H1N1 outbreak as public health officials and scientists around the world take steps to stop the virus. As of 18 May, WHO reported 8480 confirmed cases in 39 countries, with the number of H1N1 cases in Japan rising from four to 129 in 2 days.

    Austria's chancellor has overturned a decision by his science minister and decided that the country will remain a member of CERN, the European particle physics laboratory near Geneva.

    The National Science Foundation has decided to begin building a research ship, a solar telescope, and a network of ocean observatories with an investment of $400 million from the agency's $3 billion pot of stimulus money. Officials hope the allocation of $148 million will be enough to build the ice-enabled Alaska Region Research Vessel. Another $106 million will kick off the Ocean Observatories Initiative, and $146 million will build 60% of the Advanced Technology Solar Telescope in Hawaii.

    A Japanese plan to build the world's fastest supercomputer hit a roadblock last week when NEC and Hitachi announced they are withdrawing from the $1 billion project for fiscal reasons. Their departure may require officials to reconfigure the computer, which is supposed to be completed by 2012.

    The U.S. Transportation Security Administration is planning to institute new security requirements for the shipping of pathogens. The move could lead some courier companies to stop accepting shipments of pathogen samples for delivery. That step, in turn, could hurt collaborations between research labs and impede responses to public health emergencies. The proposed measures include requiring packages to be tracked at all times and mandating background checks for all employees of the courier company who might have access to the packages.

    For updates and other news, visit ScienceInsider.

  9. Public Health

    Scourge of Tobacco and Trans Fats Chosen for CDC

    1. Robert Koenig

    As a young epidemic intelligence officer assigned to New York City by the Centers for Disease Control and Prevention (CDC) nearly 2 decades ago, Thomas R. Frieden documented a serious outbreak of drug-resistant tuberculosis. Within 2 years, he was directing the city's successful efforts to contain the fast-growing outbreak, developing a program that became a model for TB control.

    Now Frieden, who has been New York City's health commissioner since 2002, is returning to CDC as its director. In a sense, he will be both an insider and an outsider, for despite his dozen years as a CDC employee working in New York City and later in India on TB control, he was never assigned to the agency's sprawling headquarters in Atlanta, Georgia.

    “He knows the CDC well, but he also has a valuable outside perspective from the front lines of public health,” says former CDC deputy director David Fleming, now public health director of King County, Washington. Adds former CDC chief Jeffrey P. Koplan, now vice president for global health at the Woodruff Health Sciences Center at Emory University in Atlanta: “He's innovative and smart, with the breadth and depth of experience in public health that's needed to lead the CDC.”

    That won't be easy. The nation's premier public health agency, with 15,000 employees and a $9 billion annual budget, was racked by allegations of political influence during President George W. Bush's Administration and divided by a reorganization advocated by former director Julie Gerberding, who left in January (Science, 16 January, p. 324). The acting director, Richard Besser, who has been widely praised for his response to the H1N1 influenza epidemic, has tried to improve morale by asking for a reassessment of some controversial changes made in recent years.

    Frieden's first weeks after he starts at CDC in early June are likely to be dominated by decisions related to the ongoing influenza epidemic, but he will also need to restore confidence in the agency. CDC staffers got a sense of his leadership style when Frieden, shortly after President Barack Obama appointed him, sent an all-agency e-mail saying that he will be “renewing and strengthening” the agency's commitment to science in public health. “Rigorous surveillance and epidemiology are not only our most powerful tools, they also are our ethos and the foundation of our authority,” Frieden wrote.

    Colleagues say Frieden, who has an M.D. from Columbia University's medical school and an MPH from the university's School of Public Health, has always focused on getting the science right. “He analyzes scientific data, applies it, and makes changes to improve public health,” says Roy Gulick, chief of the infectious disease division at Cornell University's Weill Medical College in New York City, who was a medical resident with Frieden. Gerald Keusch, Boston University's associate provost for global health, says Frieden “has great credibility among the science and public health communities for being thoughtful but tough when public health is at risk. He can make hard decisions quickly, act on them, and defend them.”

    Hard decisions.

    Thomas Frieden's aggressive public health moves were backed by New York City Mayor Michael Bloomberg (left).


    Tackling difficult public health challenges has been a hallmark of Frieden's 7 years as commissioner of New York City's Department of Health and Mental Hygiene, which has more than 6000 employees and a $1.7 billion annual budget. He waged a successful campaign to ban smoking in the city's restaurants and bars, required dining places to stop using trans fats and to post menu calorie counts, demanded that HIV tests be routine in medical exams, ordered electronic record keeping of diabetes blood-sugar tests, and advocated limiting salt content and imposing a tax on sugary drinks.

    “His work on tobacco control and his efforts to address the obesity epidemic have made New York a model” for other cities, says William Schaffner, who chairs the preventive medicine department at Vanderbilt University School of Medicine in Nashville, Tennessee. “He'll be an innovator—not a caretaker—at the CDC.” Frieden's activist style has, however, raised some concerns among those who fear he has an agenda. Scott Gottlieb, a physician and former U.S. Food and Drug Administration (FDA) deputy commissioner who is a resident fellow at the American Enterprise Institute for Public Policy Research in Washington, D.C., says Frieden is qualified but tends to be political in his approach. “To the extent that Frieden is, in my view, a political leader with a very clear policy agenda,” says Gottlieb, “his leadership is better directed out of HHS [Health and Human Services] proper” rather than out of CDC.

    Other concerns relate to whether some directives, such as the electronic reporting of diabetic patients' A1C blood-sugar test results, might intrude on privacy. David Rothman, president of the Institute on Medicine as a Profession at Columbia University, contends that Frieden's “confidence and certainty are so overwhelming that some worry that he's not concerned enough about the countervailing issues of personal liberty.” Frieden has argued that a blood-test registry is essential for accurate diabetes surveillance and that his department has vowed to protect confidential data.

    Frieden's success at CDC will depend partly on his political skills, says Arthur Reingold, head of epidemiology at the University of California, Berkeley. “The CDC director has to be an adept politician to deal with the 50 state health departments and myriad local health offices that play an important role in public health.”

    Frieden will also need to coordinate efforts, in areas such as investigating food-borne outbreaks and developing influenza vaccines, with FDA. CDC's relations with FDA have at times been rocky, but Frieden knows both the new FDA chief, former New York City health commissioner Margaret Hamburg, and her deputy, former Baltimore City health director Joshua Sharfstein.

    “Having a CDC head and FDA head who have a close prior working relationship is going to be an advantage to both agencies,” says Gottlieb. “This is an opportunity to reset that relationship.”

  10. Scientific Publishing

    Plagiarism Sleuths

    1. Jennifer Couzin-Frankel and
    2. Jackie Grom

    A Texas group is trolling through publications worldwide hunting for signs of duplicated material. The thousands of articles they've flagged online raise questions about standards in publishing—and about the group's own tactics.

    Carbon copy.

    Déjà vu detected this pair of suspect papers by different groups of authors; the one highlighted in yellow was published later and recently retracted.


    Harold “Skip” Garner never intended to become an enforcer. The affable computational biologist set out 7 years ago with a modest enough goal: to access the scientific literature more efficiently. With colleagues, he crafted a computer program called eTBLAST that could detect similarities in published abstracts, making it relatively easy to sort through the 19 million papers in a database like MEDLINE and pick out those in a narrow slice of science.

    But his group at the University of Texas (UT) Southwestern Medical Center in Dallas quickly realized that eTBLAST had another, tantalizing application. “We could do stuff like find plagiarisms,” says Garner. That held definite appeal—but first, Garner wanted to sharpen the program's accuracy. Two years ago, with support from the Office of Research Integrity and the National Institutes of Health, he launched Déjà vu, an online database that bills itself as “a study of scientific publication ethics.” It now lists 74,790 pairs of papers drawn from MEDLINE that eTBLAST has found with striking similarities in language or content. The authors include everyone from Nobel Prize winners to scientists toiling in obscure institutions in every corner of the world. When Science conducted random searches of illustrious names, between one-third and one-half showed up in Déjà vu as potential duplicators of their own or others' work. Garner and his crew have built a powerful tool for uncovering repetitious papers—and for raising authors' hackles.

    Over the past year or so, Déjà vu has rapidly gained prominence. It has prompted discussions with journal editors and at least 48 retractions of suspicious papers. In March, a rheumatologist resigned from Harvard Medical School after Déjà vu detected similarities between a review article he had published and an earlier article by a Texas researcher. Some journals now run accepted papers through eTBLAST software, which is freely available, to hunt for duplications prior to publication. Some senior faculty members contacted by Science say they would consider using Déjà vu to help guide hiring, promotion, and publication decisions.

    But how reliable is Déjà vu, and what do its developers hope to accomplish? Science examined many papers listed there and found that Déjà vu casts a wide net, scooping up innocent papers (such as translations) along with suspicious ones. Its large haul raises questions about writing and publication standards for scientific papers; it is also leaving frustrated scientists in its wake. “It's inappropriate to flag these sorts of papers,” says Lawrence Solin, a radiation oncologist at the Albert Einstein Healthcare Network in Philadelphia, Pennsylvania, who was angry to learn that he had three pairs of papers in Déjà vu, all written by him. “These people have a serious obligation to do this correctly or not do it at all. And in my view, they are simply not doing this correctly.”

    The vast majority of listings in Déjà vu, nearly 66,000, are from scientists who, like Solin, appear to be repeating their own previously published work. Repetitious reviews and incremental reports are part of an accepted tradition, and authors say they are less than thrilled to be fingered. Others say Déjà vu makes mistakes—for example, flagging similar studies on different populations.

    But Déjà vu's masters at UT Southwestern are not just out to nail plagiarists. They are challenging accepted and gray-area practices, particularly the tendency by authors and journals to recast previously published work as novel. “We don't consider ourselves the publication police,” says Garner. At the same time, he says, for Déjà vu's team, seeing what makes up the scientific literature has been one “of the most eye-opening experiences in our life.”

    Worst offenders

    Garner and his team of four run an efficient shop. The first step is automated: eTBLAST picks up suspicious papers based on similar titles and abstracts, and Déjà vu slaps them online for anyone to see. But eTBLAST isn't perfect, Garner acknowledges, so these papers are labeled “unverified”—a classification that includes more than 90% of Déjà vu's listings.
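    The automated first pass can be illustrated with a toy similarity check. This is not eTBLAST's actual algorithm, which the article does not describe in detail; it is a minimal sketch using Python's standard-library `difflib`, with an invented flagging threshold:

    ```python
    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Return a 0-1 similarity ratio between two abstracts,
        compared word by word rather than character by character."""
        return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

    def flag_pairs(abstracts, threshold=0.5):
        """Flag every pair of abstracts whose similarity meets the threshold.

        Returns a list of (index_i, index_j, score) tuples, analogous to
        Déjà vu's 'unverified' listings awaiting manual review.
        """
        flagged = []
        for i in range(len(abstracts)):
            for j in range(i + 1, len(abstracts)):
                score = similarity(abstracts[i], abstracts[j])
                if score >= threshold:
                    flagged.append((i, j, score))
        return flagged
    ```

    A real system would compare millions of MEDLINE records, so it would need an index rather than this all-pairs loop; the point here is only the shape of the pipeline: score every candidate pair, flag the high scorers, and leave verification to a human.
    
    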

    Riding high.

    Skip Garner, with one of his horses, runs Déjà vu and receives dozens of tips—and complaints.


    Reviewing papers manually is a painstaking task, led primarily by Tara Long, a mathematics major who began working with Garner in 2006 while still in college. If a great deal of text, figures, and references matches that of another paper published earlier, it is classified as a “duplicate.” On average, duplicates whose full text has been examined share 85% of their text, says Long. (Each entry consists of two papers, the earlier one and the later one.)

    Scrutinizing papers, Long shifts them out of the unverified classification and into one of four main categories: Distinct, Sanctioned, Update, or Duplicate. The first two comprise only appropriate examples of repeated work, whereas in the latter two, suitability varies depending on the circumstance (for example, if a paper was reprinted with permission). Of the 5833 pairs of papers in these four groupings, 2124 are labeled duplicates. Another key question is whether papers have different authors. Déjà vu lists close to 66,000 pairs of papers with shared authors, whereas the rest, just over 9000, have different authors. Of the 2124 listings in the duplicate category, 258 have different authors on the earlier and later paper, suggesting that they may be examples of plagiarism.

    These are the ones Garner's group has focused on most aggressively, systematically contacting authors and journals. They have followed up on 165 cases so far and prompted some acknowledgments of wrong-doing and retractions.

    Many apparent instances of plagiarism picked up by Déjà vu reflect a strategy known as “patchwriting”—an underrecognized problem in scientific publishing, according to Garner. Patchwriters lift large portions of the introduction, scientific design, and other sections of a published paper, then plug in details from their own experiment. “They don't take the data, but they take the scientific design,” says Beth Notzon, who has taught classes on publication ethics to young physicians at M. D. Anderson Cancer Center in Houston, Texas, and is administrative editor at the International Journal of Radiation Oncology, Biology, Physics. “They're able to repeat the whole thing but in a different population of patients.”

    Notzon's journal was alerted to such a case by Déjà vu. A group in China had, by Déjà vu's estimate, copied more than 95% of a paper on breast cancer first published in 2003 in the International Journal of Radiation Oncology, Biology, Physics. The Chinese group changed the focus from breast cancer to nasopharyngeal cancer, which is much more common in those of Asian ancestry, and reported data from their own patients. The lead author of the original paper, Odilia Popanda of the German Cancer Research Center in Heidelberg, notes that she was rather miffed that the Chinese work, published in 2005, appeared in a higher profile journal, Clinical Cancer Research.

    The first author of the Clinical Cancer Research paper, Wei-dong Wang, an oncologist at Xinqiao Hospital in Chongqing, China, wrote in an e-mail message to Science that “our English skill was not good enough to meet the language requirements” of Clinical Cancer Research. “To publish our findings as quickly as possible, the first author Dr. Wang organized our results in the similar pattern of Popanda's publication,” Wei-dong Wang continued, referring to himself in the message. He stressed, however, that the type of cancer and the results were different.

    Wang also wrote that “we have done foolish things” and “we should express our findings in our own words.” Wang wrote in a later e-mail message to Science that he and his co-authors had decided to withdraw the paper, and it was retracted late last month.

    Wang's account of patchwriting jibes with what Notzon has seen in her classes. She was startled to find that many foreign scholars at M. D. Anderson, particularly those from Asia, consider it perfectly appropriate. “We had a young woman visiting from China who taught writing and editing in China, and she said laughingly, ‘Oh, we encourage this sort of thing because people don't have good idiomatic English.’” But, Notzon says, patchwriting is “wrong because it's really a kind of plagiarism—they're taking someone else's research idea.”

    Challenging standards

    More discomforting questions raised by Déjà vu focus on the norms of scientific publishing. Take reviews, which make up about 20% of the listed papers, Long estimates. They often contain duplicated material, particularly from the author's own published articles. “You can't just copy your introduction [on an article]. But to what extent is that wrong? There's definitely a gray area,” says Notzon.

    When it comes to repetition of their own writing, few scientists see a problem. “When you labor over a sentence, when you love that sentence, it's really hard to move too many commas around” if you use it again, says Douglas Mann, a cardiologist at Washington University in St. Louis, Missouri, who has five pairs of papers on Déjà vu, all 10 of them reviews authored or co-authored by him. Four sets are “unverified” and one is listed as a duplicate.

    “There's going to be redundancy” in review articles, Mann continues, echoing similar comments by others, “but I don't think that's scientific misconduct.” Some blame the system. Often when a topic is trendy, journals solicit many reviews from the same author, on the same subject, in a short period of time. Authors respond with repetitive articles. In original research papers, too, wording may overlap substantially. When it comes to writing introductions, “if you have a series of papers on the same topic, I can imagine some of the same narrative getting in there, consciously or unconsciously,” says William Gelbart, a geneticist at Harvard University.

    More clear-cut is the use of material published by other researchers without proper attribution. Rudolf Weiner, a bariatric surgeon at the hospital Krankenhaus Sachsenhausen in Frankfurt, Germany, was notified of a paper that Déjà vu declared a duplication of obesity research from another bariatric surgeon at Mount Sinai School of Medicine in New York City, Daniel Herron. “Between one-third and half of the article was essentially word for word taken from my article,” a review, Herron says.

    In an e-mail to Science, Weiner, the first author on the later paper, wrote that one of his co-authors “received the order to create an introduction for this article about morbid obesity” and “he made a copy (obviously) of an introduction from the previous article.” Weiner emphasized that the article was an overview, not original research. He declined to answer any additional questions. The co-author in question died suddenly, he says. The article was not retracted, and the journal, Surgical Technology International, did not return calls seeking comment.

    Falsely fingered?

    Patrick Bossuyt discovered 19 listings under his name and says all are ethical publications.


    One duplication spotted by Déjà vu led to a resignation. Rheumatologist Lee Simon of Harvard Medical School in Boston stepped down in March after Déjà vu determined that a review published by him in August 2004 describing new treatments for rheumatoid arthritis was similar to a paper released 13 months earlier, by Roy Fleischmann of UT Southwestern. Simon could not be reached for comment. Harvard spokesperson David Cameron confirmed that Simon had resigned and that Harvard had investigated the case, but gave no details.

    Guilt by association

    Garner believes that there's no problem with quoting one's own or others' work as long as the later article cites the earlier one and makes clear what's being repeated. Translations are an obvious form of approved duplication. Indeed, a paper that Garner, Long, and their colleagues published earlier this year in Science about Déjà vu (Science, 6 March, p. 1293) will likely fall into Déjà vu's duplicate category if a translation appears in a Spanish journal, as one journal has requested. Garner says that because the Spanish version will note the publication in which the article was first published, he has no qualms about appearing in Déjà vu.

    But many with whom Science spoke disagree: Surfacing in Déjà vu, they say, suggests wrongdoing. They also lament Déjà vu's decision to publicly post tens of thousands of unverified papers. “A list like this that's computer generated can cause much harm and then put the onus on a young scientist to explain away why their entirely appropriate use of review material got them onto the list,” says Jeffrey Macklis, a neuroscientist at Harvard whose own reviews appear on Déjà vu's unverified list because of their similarity. Macklis says all of these papers properly cited his previous reviews. If just showing up in Déjà vu suggests wrongdoing, as he worries it does, that's comparable to McCarthy-era blacklists from the 1950s that, he says, “were feared” in his house. “This is meant to be a shame-and-blame list,” says Karl-Heinz Krause, a physician who studies stem cells at the University of Geneva in Switzerland. He appears in Déjà vu's duplicate category because, he says, a journal in which he published, Swiss Medical Weekly, republished a paper of his in a supplement without notifying him.

    Yellow and green.

    Some of Déjà vu's categories, such as “duplicate,” may reflect inappropriate publication, whereas others, such as “distinct,” indicate no problem.


    Patrick Bossuyt, a clinical epidemiologist at the University of Amsterdam in the Netherlands, anxiously searched Déjà vu after a colleague told him that several of his papers were listed there as “fraudulent.” (In reality, Déjà vu has no such category.) At least 10 of his 19 listings appeared in one of the “safe” categories; the others were mostly unverified, with one labeled a duplicate. The unverified listings, he says, refer to a combination of translations, updates, and distinct papers, whereas the duplicate listing captures two identical introductory articles used to present a series in different issues of Nature Reviews Microbiology. Bossuyt calls them all examples of ethical publication but still worries that so many listings could sully his reputation.

    Some also question Déjà vu's accuracy, pointing to papers it had flagged that they deem unique experiments. Nader Rifai, a clinical chemist at Harvard Medical School, appears in three listings in Déjà vu with articles that are “completely different,” he says. One includes two papers that investigated two distinct drugs. Another, for which Rifai is only on the earlier paper and not the later one, examined hormone levels associated with diabetes, with one experiment in men and one in women, he says. Walter Willett, a prominent epidemiologist and nutrition expert at Harvard School of Public Health, had a similar experience: Two of the six unverified listings on which he appears in Déjà vu describe a similar study of high blood pressure performed in different populations, men and women. Willett's other listings are reviews, which have to “cover the waterfront,” he says. “If you come back and review something in 2 years, it will probably be 80%” like the earlier article.

    Shades of gray

    Just 2 years after its launch, Déjà vu has become the place to go for anyone who wants to report suspicions of plagiarism or inappropriate duplication. It receives dozens of tips, Garner says, from “people who reported their previous mentors, their department chairman.” Garner's group spends hours contacting journals and authors to alert them of Déjà vu's findings.

    Garner says his aim is cleaning up the literature and coaxing scientists and journals to reconsider what's appropriate. Gelbart, himself a member of the booming club listed in Déjà vu, agrees that including many types of repetitious work is “useful as fodder for the scientific community to decide whether this falls within the norms of acceptable behavior or not.” His pair of listings in Déjà vu were progress reports “written with a lot of boiler-plate,” he says, intended to get the word out about FlyBase, a database of fruit fly genes that began in the early 1990s.

    Journals have responded to Déjà vu in different ways. Many have ignored inquiries about suspect papers. Journals in India and Egypt contacted by Science because the database listed them as having published more than a dozen duplicate papers did not respond.

    Some journals have embraced Déjà vu or adjusted their standards because of it. Natalie Marty, managing editor of Swiss Medical Weekly—which Krause says republished his paper without prior notification—admitted that it had reprinted many papers in a supplement but cited the initial publication. PubMed, however, failed to pick up that the papers were reprinted, and Marty says the journal has notified PubMed to add a comment to this effect. The journals Annals of Surgery and Anaesthesia and Intensive Care both learned of duplication cases from Déjà vu and now screen accepted articles with eTBLAST, available for free online, before they're printed. “It gives us more confidence about what we publish,” says Pamela Nevar, the managing editor of Annals of Surgery.

    John Loadsman, an anesthesiologist in Sydney, Australia, and editor of Anaesthesia and Intensive Care, hopes to set up an automated system to check every paper submitted to the journal with eTBLAST. His journal had 21 cases listed in Déjà vu, three of which turned out to be “true cases of duplicate publication,” he says. Loadsman doesn't believe false positives are a problem, as “it's very easy to work out” which are real.

    Some researchers say they would willingly use Déjà vu to check papers when making hiring and promotion decisions. But others—particularly those who say they appear in Déjà vu wrongly—consider that a terrible idea. Witold Filipowicz, an RNA biologist at the Friedrich Miescher Institute in Basel, Switzerland, says it's useful for scientists to be “aware that there is a watchdog.” He emigrated 25 years ago from Poland, where at the time, as in other Eastern European countries, promotions, funding, and other career decisions were primarily “based on number of publications,” he says. Although that has changed, Filipowicz estimates that now worldwide, “50% or 70% of what is published is just of no value.” Garner agrees that one issue underscored by Déjà vu is an excess of journals and of review articles in particular.

    Still, Filipowicz thinks Déjà vu ought to highlight true plagiarism and lessen its emphasis on articles that are not original research. (One of his own papers, a symposium report based on an earlier publication, has been flagged by Déjà vu as an unverified case.) “If 90% [of listings] are benign,” he says, “they will in a way muddy the real crimes,” distracting attention from where he says it should lie.

  11. Scientific Publishing

    Repetition Is Not Duplication

    1. Jennifer Couzin-Frankel

    In response to concerns about updated reviews of clinical research being flagged in Déjà vu, the plagiarism database's operators created a special category for legitimate duplications—but they refuse to remove the papers entirely.


    Kay Dickersin, an epidemiologist at Johns Hopkins University in Baltimore, Maryland, grew intrigued by Déjà vu after encountering a case of plagiarism in a class she taught. On a lark, she plugged her own name into the database and was shocked to see a pair of papers she'd authored come up. “People are going to think I'm plagiarizing myself,” she said. Quickly, Dickersin realized she had an even bigger problem on her hands: Déjà vu was riddled with papers like hers from the Cochrane Collaboration, whose U.S. center she runs, because the center's specialty is publishing updated reviews of clinical research. They are repetitive by design.

    The Cochrane database, containing about 3800 papers in all, has a whopping 2879 papers listed on Déjà vu. We “realized straight away that people would say, ‘These guys in Cochrane are just ripping each other off,’” says Nick Royle, CEO of Cochrane, based in Oxford, U.K. He immediately wrote to Harold “Skip” Garner of the University of Texas Southwestern Medical Center in Dallas, who runs Déjà vu.

    Garner responded by creating a new category for the Cochrane papers: All 2879 are now listed as “sanctioned,” a class of legitimate duplications. “To his credit,” says Royle, Garner grasped the problem. Still, Royle is uneasy with the outcome, concerned that the term “sanctioned” will be misconstrued as negative. Dickersin, who admits she might be “paranoid” about showing up in Déjà vu, would rather see the papers removed from the database altogether. But that's something Garner won't do.

    Garner has become accustomed to fielding queries from panicked, nervous, or irate scientists. He has written personal notes to more than 100 people in an attempt to assuage concerns. But not once has a listing been pulled, he says, and he won't grant special exemptions, no matter how eminent the researcher.

    Does Garner worry about posting inaccurate listings or insinuating that someone has plagiarized when they have not? “Hell, yeah,” he says. “This is a touchy subject, and it can affect people's careers.” With that in mind, Déjà vu's minders examine papers brought to their attention by the authors, and Garner then writes them to explain that the paper was reviewed and—in most cases—determined to be benign. It's then placed in an innocuous category. Pulling papers, Garner believes, would dilute Déjà vu's potency. “It's valuable to show other categories where things are highly similar,” he says, “but also valuable to science.”

  12. The 2010 Census

    America's Uncounted Millions

    1. Constance Holden

    It is getting increasingly difficult to count a mobile population, but an “adjustment” based on postcensus sampling is politically unacceptable.

    Face to face.

    The U.S. census didn't use the mails until 1970.


    Next April, the United States will embark on by far the most expensive decennial census the country has ever seen: It will cost about $15 billion—double the cost of the 2000 census—to count an estimated 309 million U.S. residents. Billed as the country's “largest peacetime mobilization,” it will send out 1.2 million temporary workers in early May to knock on the doors of about one-third of the population: the people who haven't sent in their census forms. Despite the best efforts of these workers, who may go back to the same addresses as many as six times, many people will be missed.

    How to deal with this undercount has occupied statisticians for decades. It's also of perennial interest to politicians because the census is used to reapportion seats in the House of Representatives. The Obama Administration has repeatedly said it has no plans to conduct an “adjustment” in the 2010 census count, as many statisticians have advocated. But that hasn't stopped several House Republicans from agitating about such a possibility, which they fear would benefit Democrats, and they have accused the Administration of attempting to politicize the census.

    The flames have been fanned as President Barack Obama tries to get his Commerce team in place. In February, alarm calls went out when a White House spokesperson seemed to say that Obama planned to have the census director report directly to the White House, bypassing the commerce secretary. That fuss is one reason Commerce nominee Judd Gregg, a Republican senator from New Hampshire, withdrew his name. Former census director Kenneth Prewitt, the top choice to head the bureau, dropped out soon afterward as the political heat rose.

    The new census nominee, sociologist Robert Groves of the University of Michigan, Ann Arbor, has an impeccable reputation in the statistical community. But he has been greeted with suspicion by Republicans who identify him with past attempts to adjust, or correct, the census. He spent a good deal of this month, as well as part of his confirmation hearing on 15 May, reassuring legislators that an adjustment is not on the table and that it would be too late to try in the 2010 census even if he wanted to. “Any major change would bring so much risk that the benefits would have to be clearly established,” he said.

    The adjustment in question is also referred to as “sampling,” which Republican critics say violates the Constitution's call to count every citizen rather than just sample the population. Many statisticians argue that casting the debate this way distracts from real concerns about the future of the census in a high-tech and increasingly mobile and diverse society. In fact, the technique in question does not substitute sampling for counting. Rather, it's a methodology for assessing the accuracy of an inevitably incomplete count.

    Missing millions

    The undercount problem was first recognized after the 1940 census, when the Defense Department was registering men for the draft. “More young black males registered for the draft than the census bureau thought were in the country,” says historian Margo Anderson of the University of Wisconsin, Milwaukee. It's more of a political issue in the United States than it is elsewhere, she adds, because here it is used to determine how states are represented in Congress. Furthermore, as census figures came to be used for more and more purposes, such as federal revenue-sharing, there was “huge pressure on getting census figures right,” says former House staffer David McMillen, now at the National Archives and Records Administration.

    By the late 1980s, Republicans were worrying that any adjustments to the undercount would benefit Democrats because the people who end up not being counted—the poor and minorities—tend to vote Democratic. In 1990, for example, according to a 2004 report from the National Academy of Sciences (NAS), there were 16.3 million erroneous or duplicate counts and 20.3 million people weren't counted at all. Blacks—young black males in particular—were missed at a rate four times as high as that for whites that year, according to the U.S. Census Bureau. Affluent whites, on the other hand—such as people with multiple homes or students counted both by schools and by their parents—are disproportionately found in overcounts. In 2000, NAS says 15.9 million people were missed, whereas the overcount was 17.2 million.

    By the time planning was under way for the 1990 census, adjustment had become a very contentious issue. The methodology in question is called dual system estimation (DSE). It uses a post-enumeration survey conducted independently from the census but very soon afterward. As former census director Prewitt explains, DSE is based on the “capture-recapture” principle used for counting wildlife. First you count and tag as many members of the population as you can (the census). Then, in a separate operation (the post-enumeration survey), you capture a representative sample of the population and see how many have already been tagged (counted). From the ratio of tagged to newly captured individuals, you can make a revised, and presumably more accurate, estimate of the total population.
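    The capture-recapture arithmetic behind DSE can be sketched in a few lines. The function name and the numbers below are purely illustrative, not actual census figures or bureau code:

    ```python
    def dual_system_estimate(census_count, survey_sample, matched):
        """Lincoln-Petersen (dual system) estimate of total population.

        census_count: people "tagged" by the census
        survey_sample: size of the independent post-enumeration survey
        matched: survey respondents who were also found in the census
        """
        # The survey's coverage rate is matched / survey_sample; dividing
        # the census count by that rate scales it up to an estimated total.
        return census_count * survey_sample / matched

    # Illustrative only: a census counts 280 million people, and a survey
    # of 300,000 finds that 285,000 of them were already counted (95%).
    estimate = dual_system_estimate(280_000_000, 300_000, 285_000)
    print(round(estimate))  # about 294.7 million, implying ~15 million missed
    ```

    The same ratio can be computed separately for demographic subgroups, which is how the bureau quantifies the differential undercount of, say, young black males versus affluent whites.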

    In 1990, President George H. W. Bush's census director, Barbara Bryant, planned to use a post-enumeration survey of 300,000 households as the basis for an adjustment of the final census count. Census nominee Groves, who was associate director of statistical design from 1990 to 1992, favored the approach, as did most of the bureau's senior staff, according to McMillen. But Secretary of Commerce Robert Mosbacher Sr. quashed the plan.

    The bureau again planned an adjustment for the 2000 census. This time it was challenged by a case that went to the Supreme Court in 1998. In the suit, Department of Commerce v. U.S. House of Representatives, Indiana Republican Representative Gary Hofmeister claimed that if the plan went forward, Indiana would lose a seat that would presumably go instead to a predominantly Democratic state. The court ruled that any adjustment of the final census count based on DSE could not be used for apportionment—that is, reallocation of the fixed number (435) of House seats among the states. It did not rule out DSE for other uses, including congressional redistricting (redrawing House district lines within states) and allocation of federal funds.

    Using a post-enumeration survey as an accuracy check on the census is seen as an important way for the census bureau to evaluate its data, but to date it has not been used to adjust the final count. And it's not clear it ever will be. According to Hermann Habermann, deputy director of the U.S. Census Bureau from 2002 to 2006, statisticians inside and outside the bureau have said that there are too many possible flaws in the procedure to use it for that purpose. In 2003, the Census Bureau said there would be no adjustment for other purposes either, such as redistricting or the allocation of federal funds. Since then, officials have repeatedly stated that no adjustment is planned for the 2010 census. It's not even “technically feasible” at this late date, says Prewitt. The census “is a rocket that's on the launch pad, and they're about to ignite it. They can't redesign rocket fuel at this stage.” Asked at his confirmation hearing about an adjustment in the future, Groves said, “I have no plans to do that for 2020.” In his prepared material, he added, “I believe the Supreme Court ruling stands as the guidance on this issue.”

    None of this has stopped Republicans from fretting about the possibility that the procedure will be revived in 2010. As one House Republican staffer says, “If they wanted to, they could probably do it.” In a statement last month, House Minority Leader John Boehner from Ohio accused Groves of having “advocated a scheme [in 1990] to … manipulate Census data, rather than simply conducting an accurate count of the American people.” Warned Boehner: “We will have to watch closely to ensure the 2010 census is conducted without attempting similar statistical sleight of hand.”



    A moving target

    Aside from the adjustment issue, other improvements to the census process are sorely needed, say statisticians. There has been one major development: Next year, for the first time, citizens won't have to grapple with the Long Form, a lengthy questionnaire that used to go to one in six respondents; fewer and fewer people were bothering to fill it out. It's been replaced by the American Community Survey (ACS), a “rolling” survey that covers 300,000 people each month.

    Although the ACS is generally hailed as an advance in obtaining thorough and timely information, it doesn't compensate for the fact that U.S. residents are getting ever harder to count. With traditional households few and far between, the census must count an increasingly mobile population: one where many people are suffering dislocation because of the economic recession or other factors; one with millions of illegal immigrants as well as others who prefer to avoid contact with the feds; one where even rich people can be hard to count because of the increase in hard-to-access gated communities.

    With the long form out of the way, “it means you can think of doing the census in perhaps radically different ways,” says statistician Stephen Fienberg of Carnegie Mellon University in Pittsburgh, Pennsylvania. The census is based on a system of physical addresses; its purpose is to locate an individual at “a particular geographic spot” at a particular time, says Anderson. But as the population becomes harder to track down, she says the postal system—used for the first time in the 1970 census—is already becoming “archaic” as a means to deliver the form. And modern modes of communication, the Internet and mobile phones, have the disconcerting feature of not being tied to any location. “The real question, I think, is, can you move to a person-based instead of a physical household census,” says Fienberg.

    As for reducing the undercount, the main strategy being explored is the use of “administrative records”—that is, tax data, Social Security numbers, driver's licenses, and even data from utilities to supplement census forms, an idea that's been around at least since a 1972 report, America's Uncounted People—the first of a long line of census reports from NAS. But heavy reliance on such a strategy is a long way off, not only because of privacy and security concerns but also because of the country's “decentralized statistical system,” Anderson says.

    Centralizing records such as driver's licenses would be in line with an even more radical step: a national registry. Prewitt is a proponent. In some other countries, particularly in northern Europe, traditional censuses have been eliminated, says Prewitt, as citizens are required to tell the government every time they move. “Without a national registry, we will seriously undercount,” he says, pointing out that if we miss half the estimated 12 million immigrants now in the country illegally, that's 6 million right there.

    Unfortunately, says Anderson, “unresolved debates” from past censuses “are obfuscating [and] drowning out other issues we should be talking about.”

  13. China

    An Unprecedented Dilemma: One University, Two Political Systems

    1. Richard Stone

    The University of Macau's plans for a new campus in mainland China pose a challenge to preserving academic freedom in the former colony.

    MACAU—A hilly island crowned with wind turbines and little else beckons just off the western shore of this cramped city. Macau's land-hungry authorities ogle its barren real estate, a kind of Promised Land of ample space. But Hengqin Island is also a political minefield. Macau is planning to build a new campus for the University of Macau (UM) on Hengqin; the catch is that Hengqin is part of mainland China, so the new campus may no longer come under Macau's legal and administrative system. At stake are academic freedoms and an open social milieu that Macau residents enjoy and most Chinese do not: unfettered Internet access, for instance, and a legal system that excludes capital punishment.

    Making a case for space.

    Rector Wei Zhao outlines Hengqin plans to students.


    The proposed Hengqin campus, expected to open as early as 2012, is “an unprecedented challenge to ‘one country, two systems,’ and for that reason it is extremely sensitive,” says a UM professor who requested anonymity. He predicts an exodus from UM if it adopts mainland rules. Rector Wei Zhao has vowed to preserve UM's identity. Ultimately, however, that will be Beijing's call. “It is our understanding that the decision to grant our wishes lies with the highest level in the central government,” university council chair Daniel Tse wrote in a 6 May open letter to the UM community.

    Negotiators from the mainland China and Macau governments are now trying to craft a blueprint for a new campus that satisfies UM's concerns and those of officials in Zhuhai, the city in Guangdong Province to which Hengqin belongs. The pressure is intense. During a visit to Macau in January, Chinese Vice President Xi Jinping, heir apparent to President Hu Jintao, extolled development of Hengqin as a way to diversify Macau's economy, which now depends on tourism revenue from gambling. Then in February, China's powerful National Development and Reform Commission (NDRC) touted the Hengqin campus. “Xi's blessing and NDRC support make it a fait accompli,” says the UM professor. “They have to find a way to make it happen.”

    When Portugal returned Europe's first and last Asian colony to China in December 1999, China brought Macau under its military and foreign affairs umbrella. A Beijing-approved chief executive now governs the Macau Special Administrative Region. But as it did for Hong Kong in 1997, China granted Macau considerable autonomy, including a pledge to leave Macau's European-style legal code intact until at least 2049.

    For several years after its return to the motherland, Macau flourished as China's only haven for legal gambling. Last autumn, however, China imposed restrictions on mainlanders' travel to Macau, and casinos reported a 10% decline in revenue. This gave a fresh impetus to efforts to diversify Macau's economy. That will be no mean feat, as rapid growth has already left the tiny city—a mere 29 square kilometers comprising Macau peninsula and two islands, Taipa and Coloane—bursting at the seams.

    UM needs elbow room, Zhao says. Its present campus is hemmed into a sliver of Taipa, capping enrollment at about 6600. He notes that whereas Harvard University boasts 960 m² of campus per student and Tsinghua University in Beijing has 150 m², UM has precisely 8.7 m². The Hengqin campus, on a 1.4-km² plot across a narrow inlet from Taipa, would comfortably accommodate 10,000 students in a residential college system modeled after Yale University's, says Zhao.

    In March, Zhao and other UM administrators held a series of 17 meetings with academic and local communities. Faculty and students credit UM's leaders for their openness; at the same time, many have argued that the university would change irrevocably if the new campus were under mainland jurisdiction. “I go to the University of Macau because it's not China,” one mainland Chinese student flatly declared at a town hall meeting in March.

    That's a sore point for the new host jurisdiction of Zhuhai. Local officials have questioned how Zhuhai would benefit if a piece of Hengqin, the largest of Zhuhai's 146 islands, were essentially handed over to Macau. “In the short term, there may appear to be no advantage to Zhuhai. But having a brand-name university right on their doorstep will help them tremendously,” argues Zhao. For starters, he says, UM intends to adopt a preferential admissions policy for Guangdong applicants.

    In a presentation last month, Zhao and fellow administrators portrayed the Hengqin campus as an island within an island. Macau's legal system and the university charter would prevail, they said, and Macau would oversee all services, including phone lines and Internet. UM leaders said “segregation measures”—such as forests and artificial lakes—surrounding the campus would “ensure security.” They also proposed a dedicated bridge between the Hengqin campus and Taipa so that staff and students need not pass through immigration controls. (UM would retain its Taipa campus.) This month, the university commissioned a “first draft” of a campus plan that would serve as the basis for negotiations between Macau and Guangdong.

    There are bound to be bumps along the way. Last month, Macau legislators questioned whether the city can afford UM's expansion; some worry that if the Hengqin campus were subject to Chinese law, UM would struggle to maintain, let alone boost, enrollment. In contrast, one Zhuhai power broker is worried about too much liberty: Running the Hengqin campus under Macau's legal system would be “seriously unlawful” and “damage Zhuhai's interests,” petroleum magnate Li Jiankang, a representative of Zhuhai People's Congress, told the newspaper Nanfang Dushi Bao last month.

    Still, with China's presumptive next president eyeing developments, few people doubt the Hengqin expansion will occur under the banner of one country, two systems. The only question is which system that will be.

  14. Plant Biology

    Stressed Out Over a Stress Hormone

    1. Elizabeth Pennisi

    The hormone ABA lets plants handle rough times and holds promise for making drought-resistant crops, if only researchers could nail down its molecular partners.

    Frugal water use.

    Plants with the most enhanced sensitivity to the hormone ABA (far right) do better than their wild counterparts (far left) in drought.


    When stresses mount, plants can't simply walk away from their problems or head to the nearest bar. Instead of turning to a stiff drink, plants can count on a chemical called abscisic acid (ABA) to bail them out. Discovered in the 1960s by groups searching for substances that regulate bud dormancy and the shedding of leaves and fruits, ABA has turned out to be a vital hormone that plants use when conditions are rough. Concentrations of ABA in leaves shoot up when plants dry out or get too cold; and as the hormone spreads throughout the plant, it stimulates genes involved in a variety of responses. The hormone curtails water loss, helps seeds wait out bad conditions, and inhibits root and other vegetative growth.

    Plant biologists have labored for decades to understand how plant cells sense and react to ABA. The payoffs could be substantial. “If, based on improved knowledge of how plants perceive and respond to ABA, we can improve crop yield in the face of drought and other stresses, this can have a major impact on food production and freshwater availability,” says Sarah Assmann, a plant biologist at Pennsylvania State University, University Park.

    But the hunt for an ABA receptor, a plant-cell protein that recognizes the hormone and conveys its gene-regulating orders to the nucleus, has been full of frustration and controversy. In 2006, plant biologists finally celebrated the first report of such a receptor. And within a year, two other putative ABA receptors were identified in high-profile papers. But the initial receptor discovery soon had to be retracted, and the other candidate receptors have also fallen under intense scrutiny. A fourth, reported just 5 months ago, is provoking debate, too. Those in and outside the field are shaking their heads. “The whole field of plant biology has been hurt,” says Alan Jones, a cell biologist at the University of North Carolina, Chapel Hill.

    Now, two more research teams are jumping into the fray. They have independently homed in on yet another ABA receptor, which they describe on pages 1064 and 1068. “They provide the first convincing report of a class of proteins that link ABA binding with downstream ABA responses,” says Richard Macknight of the University of Otago in Dunedin, New Zealand. “They are landmark papers.” Given the events of the past 3 years, however, some plant biologists reserve final judgment on whether the hunt for an ABA receptor is over. The two groups offer “the best data set” so far, Jones says, “but no single work is sufficient to reach such an important conclusion.”

    First but flawed

    Several factors have stymied researchers on the ABA receptor trail. The hormone's influence is pervasive, affecting many genes and pathways. These pathways can sometimes overlap, making it difficult to find and characterize mutant plants with clear-cut defects in ABA responses. The hunt is also complicated by the fact that plants have evolved their own distinct receptors, so computer scans that look for plant genes similar to those that code for receptors in animals can miss likely candidates.

    Breakaway growth.

    Seeds from plants with mutant receptors (right three) don't recognize ABA's signal to stay dormant and are at risk of sprouting in adverse conditions.


    Test-tube studies offer their own problems. Plant cells contain packets of protein-destroying enzymes that are easily released when the cell is disturbed, so isolating intact proteins from plant cells is challenging. And proving that a protein and ABA bind specifically, and at realistic hormone concentrations, “is an easy thing to screw up,” says Michael Sussman, a biochemist at the University of Wisconsin, Madison. Thirty years ago, Sussman notes, experiments he did “proved” that talcum powder was a receptor for the plant hormone cytokinin.

    Nonetheless, a team led by Robert Hill of the University of Manitoba in Winnipeg, Canada, appeared to have overcome these hurdles in 2006, identifying a protein called FCA as a probable ABA receptor. FCA works in a plant cell's nucleus, where it binds to strands of messenger RNA, effectively curtailing the production of a protein called FLC that otherwise inhibits flowering. FCA requires a partner protein, FY, to do this job, and Hill's team reported in the 19 January 2006 issue of Nature that ABA bound FCA, interfering with its interactions with FY both in lab tests and in living plants. As a result, the amount of FLC messenger RNA (and presumably, the resulting protein) soars, delaying flowering. “A door to understanding ABA perception has been opened,” declared Julian Schroeder and Josef Kuhn of the University of California (UC), San Diego, in an accompanying commentary.

    It slammed shut fast, however. A group led by Macknight had trouble confirming the FCA results and notified Hill's team, which was having similar problems. Hill and his group spent 6 months in vain trying to replicate their own work. Ultimately, in the 11 December 2008 issue of Nature, Macknight's group published its negative findings on FCA, and Hill's team formally retracted the 2006 paper, citing “errors in the calculations” and “difficulties” with the original data.

    Even as that episode was playing out, other putative ABA receptors emerged—and they would prove almost as controversial. Schroeder and Kuhn's commentary had pointed out that FCA wasn't involved in other plant properties, such as seed dormancy or root growth, that ABA was known to regulate. That meant more ABA receptors were waiting to be discovered, plant biologists surmised.

    Da-Peng Zhang of China Agricultural University in Beijing claimed to find one late in 2006. It was part of an enzyme called magnesium chelatase, known to be involved in making chlorophyll. Zhang and his colleagues showed that it bound to ABA and that disabling the magnesium chelatase gene in Arabidopsis plants made them unresponsive to ABA. Their seeds germinated even after exposure to ABA, and the stomata didn't close down the way they should have when water was withheld, Zhang's team reported in the 19 October 2006 issue of Nature.

    Doubts about this putative receptor have grown, too. Mats Hansson of the Carlsberg Laboratory in Copenhagen recently looked at barley plants in his lab that have mutations disabling the enzyme's gene. If the chelatase was an ABA receptor, those mutants should have little to no response to treatments with the hormone. But the mutant plants reacted like normal ones, he and his colleagues reported online on 28 January in Plant Physiology Preview. “We were very disappointed,” Hansson recalls.

    Magnesium chelatase, like FCA, resides inside a plant cell. Yet previous ABA studies had indicated that plants sense the hormone outside the cells as well. That's why Ligeng Ma of the National Institute of Biological Sciences in Beijing and his colleagues took a different tack as they hunted for the ABA receptor. They focused on a well-known family of membrane-spanning molecules called G protein–coupled receptors that often convey signals from the cell surface. In a computer search for similar gene sequences in the Arabidopsis genome, Ma's team found a gene for a new one, GCR2. Its protein bound to ABA, Ma and his colleagues discovered. And when the investigators disrupted the GCR2 gene in Arabidopsis, its seeds sprouted prematurely and the mutant plant's stomata got stuck open, indications that ABA was not able to do its job (Science, 23 March 2007, p. 1712).

    However, within months, this ABA receptor was challenged. Jones and colleagues argued that GCR2 was neither a G protein–coupled receptor nor a transmembrane protein (Science, 9 November 2007, p. 914). And several other papers have also questioned the idea of GCR2 as an ABA receptor, with Macknight offering data indicating it doesn't actually bind to the hormone.

    Zhang and Ma maintain that the chelatase and GCR2 are ABA receptors, and each continues to study his respective discovery. In the meantime, a fresh ABA receptor controversy has been brewing. In the 9 January issue of Cell, Assmann linked a new subtype of G protein–coupled receptors called GTGs to ABA perception. The proteins bind ABA, and mutant plants lacking GTGs lose most of their ability to respond to the hormone, she and her colleagues reported.

    But test-tube assays that measure how effectively the proteins latch onto ABA indicate that only 1% of the GTG present was locking up hormone, leaving open the question of just how good a “receptor” the GTG proteins are. Such studies are difficult to do with membrane proteins. In animals, the equivalent protein is an ion channel, so it's possible that's what GTGs do in plants, Erwin Grill and Alexander Christmann of the Technical University of Munich in Germany suggested in a Cell commentary. Until more evidence comes in, “we should just be cautious,” says Jones.

    The real McCoy?

    Two groups have brought the newest candidate ABA receptor to light. Grill decided to work his way into the hormone's signaling mechanism by starting with two enzymes called ABI1 and ABI2. Mutations in the genes for these proteins produce plants that no longer respond well to ABA, so Grill went fishing for plant-cell proteins that bind with ABI1 or ABI2. The researchers eventually isolated two new proteins, dubbing each a “regulatory component of ABA receptor” (RCAR).

    Their experiments showed that although either ABI1 or ABI2 alone binds weakly to ABA, complexes of one of the enzymes and an RCAR bind quickly and strongly to the hormone. The team concluded that the hormone starts its signaling cascade by binding to the RCAR-enzyme complexes and shutting down the enzyme's activity.

    Unbeknownst to Grill until a year ago, Sean Cutler of UC Riverside was homing in on the same ABA receptor complex using a very different approach. His team hunts down molecules that can control plant growth. They found one, which they called pyrabactin, that revved up ABA activity when it was applied to plants, delaying seed germination and improving drought tolerance. “I figured characterizing pyrabactin would lead to an understanding of how to ‘drug’ the ABA pathway,” Cutler recalls. “I also hoped, very wishfully, that we might get a receptor out of our efforts.”

    Receptor reconnaissance.

    Blue stain shows where a new putative ABA receptor is in a seedling.


    It took 3 years, but Cutler's wish came true when experiments showed that a protein that pyrabactin binds to, which Cutler dubbed PYR1, links up with a molecule called HAB1. HAB1, like ABI1 and ABI2, is a phosphatase, an enzyme that removes phosphate groups from the amino acids on other proteins, often as part of a signaling cascade.

    As it turns out, PYR1 was a member of the family of proteins that Grill calls RCAR. (The family has about 14 members, several of which influence ABA activity.) In work similar to Grill's, Cutler showed that ABA binds to PYR1, which then inhibits its phosphatase partner. In test-tube experiments, ABA doesn't inhibit phosphatase activity, except when PYR1 or one of its relatives is present as a gatekeeper. The test-tube studies “are some of the most important strengths of the two papers,” says Sussman. “It fits in with the biology.”

    Others are more cautious. Assmann calls for more control experiments in the binding studies. And the data from the two groups differ in the degree to which the enzymes shut down by ABA are themselves required to promote the interactions between the hormone and RCAR/PYR1 molecules. Are the enzymes part of a true receptor complex, or are RCAR/PYR1 alone the hormone receptors, which then act upon the enzymes? “There are some discrepancies,” says Jones.

    Given all the skepticism about previous finds, adds Jones, any proposed ABA receptor won't be validated by a paper or two. “There have been a number of red herrings in this field,” Assmann agrees, “so at this point there may be some reluctance to accept even bona fide ABA receptors.” For a hormone that helps plants deal with stress, ABA is certainly stressing out the plant biology community.