Policy Forum

Self-correction in science at work

Science  26 Jun 2015:
Vol. 348, Issue 6242, pp. 1420-1422
DOI: 10.1126/science.aab3847

ILLUSTRATION: DAVIDE BONAZZI

Week after week, news outlets carry word of new scientific discoveries, but the media sometimes give suspect science equal play with substantive discoveries. Careful qualifications about what is known are lost in categorical headlines. Rare instances of misconduct or irreproducibility are translated into concerns that science is broken. An October 2013 Economist headline proclaimed “Trouble at the lab: Scientists like to think of science as self-correcting. To an alarming degree, it is not” (1). Yet that article is also rich with instances of science both policing itself, which is how the problems came to The Economist's attention in the first place, and addressing discovered lapses and irreproducibility concerns. In light of such issues and efforts, the U.S. National Academy of Sciences (NAS) and the Annenberg Retreat at Sunnylands convened our group to examine ways to remove some of the current disincentives to high standards of integrity in science.

Like all human endeavors, science is imperfect. However, as Robert Merton noted more than half a century ago, “the activities of scientists are subject to rigorous policing, to a degree perhaps unparalleled in any other field of activity” (2). As a result, as Popper argued, “science is one of the very few human activities—perhaps the only one—in which errors are systematically criticized and fairly often, in time, corrected” (3). Instances in which scientists detect and address flaws in work constitute evidence of success, not failure, because they demonstrate the underlying protective mechanisms of science at work.

Still, as in any human venture, science writ large does not always live up to its ideals. Although attempts to replicate the 1998 Wakefield study alleging an association between autism and the MMR (measles, mumps, and rubella) vaccine quickly discredited the work, it took far too long—12 years—for The Lancet to retract that fatally flawed article. By contrast, problems flagged in the January 2014 Obokata pluripotent stem cell papers led to a prompt investigation by the research institute and to Nature's retraction of the papers by July 2014.

Leaders in the research community are responsible for ensuring that management systems keep pace with revolutions in research capacity and methods. Consistent with their self-correcting norm, scientists are actively addressing the disconcerting rise in irreproducible findings and retractions. As the Economist article itself noted, PLOS One and Science Exchange had begun an initiative “through which life scientists can pay to have their work validated by an independent lab”; Nature had initiated an 18-point checklist for authors “to ensure that all technical and statistical information that is crucial to an experiment's reproducibility or that might introduce bias is published”; and Perspectives on Psychological Science developed “a section devoted to replications” (1). Conferences such as the Association for Computing Machinery's Special Interest Group on Management of Data have begun to require reproducibility in accepted papers. After Obokata, Nature announced measures to increase the likelihood that misrepresented visuals will be detected in the review process (4). Other efforts include tighter financial disclosure rules, journal guidelines mandating increased transparency [see Nosek et al., page 1422 in this issue (5)], and increased data disclosure (6). Innovations such as CrossMark and Retraction Watch have made it easier for scholars to purge retracted work from the scholarly dialogue.
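To illustrate how such tools can be folded into everyday practice, the sketch below checks the DOIs in a reference list against a locally saved list of retracted papers before citation. It is a minimal illustration, not the actual interface of CrossMark or Retraction Watch: the file name "retractions.csv", its "doi" column, and the placeholder DOI "10.1234/example.doi" are assumptions made for the example.

    # Minimal sketch: flag bibliography entries that appear in a list of
    # retracted DOIs. The CSV layout ("retractions.csv" with a "doi" column)
    # is a hypothetical stand-in for whatever export a retraction-tracking
    # service provides; it is not a documented CrossMark or Retraction Watch format.
    import csv

    def load_retracted_dois(path):
        # Read the CSV and return a set of normalized (lower-cased) DOIs.
        with open(path, newline="", encoding="utf-8") as handle:
            return {row["doi"].strip().lower()
                    for row in csv.DictReader(handle) if row.get("doi")}

    def flag_retracted(bibliography, retracted):
        # Return the DOIs in the bibliography that match the retraction set.
        return sorted(doi for doi in bibliography if doi.strip().lower() in retracted)

    if __name__ == "__main__":
        retracted = load_retracted_dois("retractions.csv")
        bibliography = [
            "10.1126/science.aab3847",   # this article's DOI, used as a benign example
            "10.1234/example.doi",       # hypothetical DOI standing in for a retracted paper
        ]
        for doi in flag_retracted(bibliography, retracted):
            print("Check before citing:", doi, "appears in the retraction list")

Run before submission, a check of this kind is cheap and complements, rather than replaces, reading the retraction notices themselves.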

INCENTIVES FOR QUALITY AND CORRECTION. Nosek and his colleagues have argued that some key incentive structures embraced by academia are counterproductive. Researchers are encouraged to publish novel, positive results and to warehouse any negative findings (7). Cash bonuses paid in a number of countries have increased the number of submissions to prestigious journals (8), and in some instances that work has been fraudulent. Growing numbers of biomedical students funded by the U.S. National Institutes of Health contribute to a pipeline of researchers in serial postdoctoral positions, as well as to declining morale and ever-lengthening time until researchers obtain their first independent research grant.

We believe that incentives should be changed so that scholars are rewarded for publishing well rather than often. In tenure cases at universities, as in grant submissions, the candidate should be evaluated on the importance of a select set of work, instead of using the number of publications or the impact rating of a journal as a surrogate for quality. This practice is already used in nominations for election to the NAS and in selecting honorees in professional societies.

The peer-review process should do a better job of mentoring young reviewers, increasing the clarity and quality of editorial response, and uncovering instances in which a reviewer is biased for or against a particular work. Although this may add to the reviewers' burden, one solution, used by eLife, is for reviewers to share their comments with each other and collaborate on a response before sending comments to the author with the editorial decision. Science's cross-review, in which reviews are exchanged between reviewers, allows editors to capture broader input on the reviews before returning them to the authors. Were some reform to be implemented to address bias in reviewing, along with an incentive structure that rewards publishing quality rather than quantity, we suspect that the integrity of science could be better protected at no net increased cost in reviewers' time.

Reliance on the term “retraction” may create a disincentive to act in the best interests of science. The word “retraction,” with its negative connotation, covers withdrawal of scholarship both for inadvertent error and for misconduct. Yet, voluntary withdrawal of findings by a researcher eager to correct an unintended mistake is laudatory, in contrast to involuntary withdrawal by a duplicitous researcher who has published fraudulent claims. Alternative nomenclature such as “voluntary withdrawal” and “withdrawal for cause” might remove stigma from the former while upping it for the latter. “Voluntary withdrawal” would be for papers that are wrong in major respects, cannot be fixed with just a correction, but are not a result of fraud or misconduct. Authors would be encouraged to take this route to avoid leaving confusing papers in the literature and to preserve their reputations. Ideally, some statute of limitations should be placed on voluntary withdrawals. “Withdrawal for cause” would be invoked any time fraud or misconduct taints the published literature. There should be no statute of limitations on withdrawal for cause.

In a similar vein, “conflict of interest” implies that disclosed relationships are corruptive. Adopting more neutral language such as “disclosure of relevant relationships” may encourage more complete compliance without implying that all disclosed associations are sinister.

INVESTIGATION AND EDUCATION. Authoritative and timely investigations into allegations of misconduct are critical to ensuring that flawed findings, which because of fraud or misconduct cannot be redeemed, are formally decertified. Because journals lack the wherewithal to investigate allegations of misconduct in published research, they rely on scholars' home institutions to address such concerns. Funding agencies often play an oversight role. In cases in which the institution is unwilling, conflicted, or incapable of investigating, consequential flawed findings might linger in the literature. A more robust structural solution is needed.

Even when institutions, funders, and journals work in good faith to address misconduct and ensure the accuracy of the research record, they can be overwhelmed by the growing number of allegations, the expense and complexity of investigations, the difficulties in addressing misconduct in international and cross-disciplinary collaborations, and the emergence of technologies that make it easier both to commit misconduct and to detect it. The investigating unit should be respected, neutral, and nimble; it should have wide access to necessary expertise and be capable of responding regardless of the funding source.

Ensuring that the integrity of science is protected is the responsibility of many stakeholders. In 1992, the NAS called for an independent Scientific Integrity Advisory Board to exercise leadership in addressing ethical issues in research conduct (9). Although that recommendation was never implemented, such an entity could serve as a respected and neutral resource that supports and complements the efforts of the research enterprise and its key stakeholders.

Universities should insist that their faculties and students are schooled in the ethics of research, their publications feature neither honorific nor ghost authors, their public information offices avoid hype in publicizing findings, and suspect research is promptly and thoroughly investigated. All researchers need to realize that the best scientific practice is produced when, like Darwin, they persistently search for flaws in their arguments. Because inherent variability in biological systems makes it possible for researchers to explore different sets of conditions until the expected (and rewarded) result is obtained, the need for vigilant self-critique may be especially great in research with direct application to human disease. We encourage each branch of science to invest in case studies identifying what went wrong in a selected subset of nonreproducible publications—enlisting social scientists and experts in the respective fields to interview those who were involved (and perhaps examining lab notebooks or redoing statistical analyses), with the hope of deriving general principles for improving science in each field.

Industry should publish its failed efforts to reproduce scientific findings and join scientists in the academy in making the case for the importance of scientific work. Scientific associations should continue to communicate science as a way of knowing, and educate their members in ways to more effectively communicate key scientific findings to broader publics. Journals should continue to ask for higher standards of transparency and reproducibility.

We recognize that incentives can backfire. Still, because incentives such as enhanced social image and forms of public recognition (10, 11) can increase productive social behavior (12), we believe that replacing the stigma of retraction with language that lauds the reporting of unintended errors in a publication will increase that behavior. Because sustaining a good reputation can incentivize cooperative behavior (13), we anticipate that our proposed changes to the review process will not only increase the quality of the final product but also expose efforts to sabotage independent review. To ensure that such incentives not only advance our objectives but above all do no harm, we urge that each be scrutinized and evaluated before being broadly implemented.

Will past be prologue? If science is to enhance its capacities to improve our understanding of ourselves and our world, protect the hard-earned trust and esteem in which society holds it, and preserve its role as a driver of our economy, scientists must safeguard its rigor and reliability in the face of challenges posed by a research ecosystem that is evolving in dramatic and sometimes unsettling ways. To do this, the scientific research community needs to be involved in an ongoing dialogue. We hope that this essay and the report The Integrity of Science (14), forthcoming in 2015, will serve as catalysts for such a dialogue.

Asked at the close of the U.S. Constitutional Convention of 1787 whether the deliberations had produced a republic or a monarchy, Benjamin Franklin said “A Republic, if you can keep it.” Just as preserving a system of government requires ongoing dedication and vigilance, so too does protecting the integrity of science.

References and Notes
