News of the Week | STEM CELLS

... And How the Problems Eluded Peer Reviewers and Editors

Science  06 Jan 2006:
Vol. 311, Issue 5757, pp. 23-24
DOI: 10.1126/science.311.5757.23

The paper landed in Science's online database on 15 March 2005, a Tuesday. Immediately, the journal's editors recognized a submission of potentially explosive importance. A group in South Korea was describing 11 embryonic stem (ES) cell lines created from the DNA of ailing patients. The advance, eagerly anticipated in the stem cell world, would be a first, and critical to using stem cells to combat disease.

Little did Science's editors, or the nine outside researchers who would examine the paper with varying degrees of scrutiny, realize just how explosive the paper would be. Today, its lead author Woo Suk Hwang stands accused of one of the boldest scientific frauds in memory. Investigators at Seoul National University (SNU), where most of the work was done, announced on 29 December that they could find no evidence of any of the 11 stem cell lines claimed in the paper. On the 10th floor of Science's offices in Washington, D.C., meanwhile, members of the editorial department are spotting problems in Hwang's 2005 paper, as well as another landmark paper from his group published in 2004.

Could Science have detected the fraud? Science's editors and many stem cell researchers believe not: The 2005 paper was positively received by its peer reviewers, upon whom Science relied heavily to determine whether the paper was worth publishing. “Peer review cannot detect [fraud] if it is artfully done,” says Donald Kennedy, Science's editor-in-chief. And the reported falsifications in the Hwang paper—image manipulation and fake DNA data—are not the sort that reviewers can easily spot.

Martin Blume, editor-in-chief of the American Physical Society and its nine physics journals, says that peer review overlooks honest errors as well as deliberate fraud. “Peer review doesn't necessarily say that a paper is right,” he notes. “It says it's worth publishing.”

That said, Science, like other high-profile journals, aggressively seeks firsts: papers that generate publicity and awe in the scientific community and beyond. The practice comes with some risks, critics say, because by definition firsts haven't been replicated. “Is the reviewing looser” on a potentially high-impact paper? asks Denis Duboule, a geneticist at the University of Geneva, Switzerland, who sits on Science's Board of Reviewing Editors. “Frankly, I don't know.” The Hwang paper was accepted 58 days after submission, more swiftly than the average of 81 days.

Science has also not instituted certain policies, such as requesting that authors detail their contributions to a paper or performing independent analyses of images, that some believe might deter fraud. The latter will change in January, when certain images in papers near acceptance will be enlarged and scrutinized by Science staffers—a plan in place prior to the Hwang debacle.

After receiving the Hwang paper, Science sent it to two members of its Board of Reviewing Editors, who had 48 hours to proffer their opinions on whether it should be among the 30% of papers sent out for review. (The journal later sent the paper to four additional board members.) Science declined to identify the board members who vetted it.

Board members do not inspect a paper's data but instead look for “a mixture of novelty, originality, and trendiness,” explains Duboule. On 18 March, after receiving positive feedback from the two board members, an editor sent the paper to three stem cell experts for review. They were given a week, a fairly common time frame.

In this role, “you look at the data and do not assume it's fraud,” says one expert who, on condition of anonymity, told a Science reporter that he had reviewed the paper. As a reviewer, he says, he sought to ensure that the scientists had identified key markers that distinguish stem cells from other cells and that the DNA “fingerprints” from the stem cells matched those from the patients. The photographs of stem cells and fingerprint data appeared to be in order, he says.

In fact, a number of the images purporting to be of distinct stem cells garnered from patient cells were neither distinct nor from patients. The cells had been extracted from fertilized embryos, the SNU committee alleged, and, in the published version now being analyzed, supposedly different colonies were duplicated or overlapping members of the same ones.

But ES cell colonies often look alike, says John Gearhart of Johns Hopkins University in Baltimore, Maryland, and “you don't really look at a photograph to say, ‘That's the same colony turned around.’” A member of Science's Board of Reviewing Editors, Gearhart declines to say whether he examined the paper prior to publication. Even knowing now about the fraud, Gearhart says the deceptions are difficult to spot with his naked eye.

The paper also displayed DNA fingerprints that it claimed showed genetic matches between the patients' DNA and the stem cell lines. Here again, the peer reviewers were fooled. According to the SNU investigation, the analyses were performed solely on samples of the patients' DNA. Only by monitoring an ongoing experiment or analyzing the sample being tested could this deception be unveiled, says David Altshuler of the Broad Institute in Cambridge, Massachusetts, who pored over the DNA fingerprinting data after problems with the paper arose. He says he saw nothing amiss. “The whole issue would boil down to, is the stuff in this tube … from the DNA sample of the donor or the DNA sample of the stem cell line?” says Altshuler.

[Figure] Duplicate deception. Photographs purported to be of distinct stem cell lines were actually overlapping images of the same colony. CREDIT: K. BUCKHEIT/SCIENCE

Although the flaws in the Hwang paper were especially difficult for reviewers to catch, the peer-review system is far from foolproof, its supporters concede. In 1997, editors at the British Medical Journal (BMJ) described a study in which they inserted eight errors into a short paper and asked researchers to identify the mistakes. Of the 221 who responded, “the median number spotted was two,” says Richard Smith, who edited BMJ from 1991 until 2004. “Nobody spotted more than five,” and 16% didn't find any.

Some journals have taken steps they hope will keep their pages cleaner. Beginning around 2000, the Journal of the American Medical Association (JAMA) and other major medical journals began requiring that every author detail his or her contributions to the work. “Obviously, people can lie and cheat, but they have to do it with the knowledge that their colleagues know, and that's a lot harder to do,” says JAMA Deputy Editor Drummond Rennie, who came up with the idea in 1996. “And later, they have to answer for it.”

Although this policy is mandatory at many medical journals, it's voluntary at Blume's physics journals and at Nature. Science has not adopted this approach. “If the paper is wrong and has to be retracted, then everyone takes the fall,” says Kennedy, who believes that detailing contributions can be “administratively complex,” and that perpetrators may be less than honest about their contributions.

But some scientists such as Duboule and Gearhart believe Science should require authors to describe their contributions. “There should have been some documentation” of who did what on the Hwang project, says Gearhart.

Not only would such a record make it easier to assign responsibility now, says Gearhart, but it would also help clarify the role of a lead author, Gerald Schatten of the University of Pittsburgh in Pennsylvania. Lead authors are often considered responsible for the integrity of the data, and Schatten has come under heavy criticism for acting principally as an adviser to the South Korean group. The University of Pittsburgh has launched its own investigation into Schatten's role in the research.

In the aftermath of the Hwang case, editors at Science will be having “a lot of conversations about how we can improve the evaluation of manuscripts,” says Kennedy. One thing unlikely to change is the aim of high-profile journals to publish, and publicize, firsts. “You want the exciting results, and sometimes the avant-garde exciting results don't have the same amount of supporting data as something that's been repeated over and over and over again,” says Katrina Kelner, Science's deputy managing editor for life sciences. In weighing whether to publish papers such as these, “it's always a judgment call,” she says.

But studies are rarely accepted as dogma until they're replicated, says Altshuler, a distinction often lost on the general public—and sometimes other scientists—amid the hype that envelops firsts such as Hwang's paper. Says Altshuler, “A culture that wanted to see things reproduced before making a big deal out of them would probably be a healthier culture.”
