In this week's issue of Science, a sad and troubling communication appears on p. 1141. In it, Michael Lieber and his colleagues announce the retraction of a paper on the role of RNA/DNA hybrids in immunoglobulin class switch recombination. The first author of the paper has admitted altering gel records and other data. These alterations were major and invalidate the central conclusions not only of this paper but also of a second, published in another journal. Lieber discovered the problem himself upon failing to replicate the results obtained by his coauthor and reported it promptly to us and to his institution.
This is a rare experience for Science, one that invites us to explore its source and its consequences. Such deceptions generate serious collateral damage. Our editors invited a distinguished scientist in the field to write a Perspective on the paper. The author now discovers, to her embarrassment, that what she wrote was a thoughtful evaluation of a non-experiment. Scientists unknown to us relied on meaningless results, perhaps altering their own research plans as a consequence, and busy peer reviewers wasted valuable time. There is an even heavier cost: Each such case represents another depreciation of trust, not only within our community but also on the part of our public patrons.
At Science, we feel taken in and find ourselves wondering whether we did everything we could to spot the deception. Some comfort is at hand; many years ago, in commenting on some experiments alleged to demonstrate “extrasensory perception,” George Price made a wise observation. He pointed out that although science had developed robust ways of controlling chance, it had invented scant protection against fraud. A clever laboratory cook can invent data that are immune to vigilant reviewers and to any diagnostic test save repetition, the only proven scientific remedy. Thus, research fraud must be interdicted upstream, by a community convinced that the price it exacts is too high.
We need to know more about what motivates scientific misconduct and what can protect against it. Alas, the thin database on the etiology of fraud yields only a few hints. So far, the incidents have been concentrated in the life sciences, particularly basic biomedical research. As recent events in Europe demonstrate, it is an international problem. Most cases arise in unusually busy laboratories, where principal investigators have little time to participate personally in the experiments or, in some cases, even to check closely on their progress. Nothing in the record suggests that commercial profit is the primary driver; most transgressions have occurred in research universities of high rank, where prestige is the coin of the realm.
Past research misconduct cases may provide some guidance about what works as a disincentive and what does not. Politically driven persecution sends a strong message all right, but it is the wrong one. The 10-year congressional pursuit of David Baltimore's coauthor, Thereza Imanishi-Kari, tells us only that too much government power invites abuse. Bad investigative procedures, both in government and in universities, have too often produced headlines that were only put to rest by long-deferred due process. On the other hand, an institutional reputation for careful investigation and a culture favoring individual responsibility for research integrity and discouraging “honorary” authorship are essential. The importance of research productivity in academic promotion decisions exacerbates scientific competitiveness; it would help if institutional rules emphasized quality over quantity. Finally, the relatively new federal regulations defining research misconduct and establishing requirements for training students and fellows may help, although it's too early to tell.
What role should Science play? Plainly, journals, as the destinations toward which research results are headed, have some responsibility. Although they cannot create deception-proof peer review, they can treat retractions honestly and forthrightly. They can express the community's interest in the trustworthiness of results and close their pages to transgressors. They should also praise responsible actions, especially when those carry personal costs. We do not know whether Lieber might have looked more closely at the work under way or encouraged a less stressful laboratory atmosphere. What we do know is that when the problem came to light, he did the right thing.