News

The Power of Negative Thinking

Science, 04 Oct 2013:
Vol. 342, Issue 6154, pp. 68-69
DOI: 10.1126/science.342.6154.68

Gaining ground in the ongoing struggle to coax researchers to share negative results.

Glenn Begley was stymied. At the drug giant Amgen, where Begley was vice-president and global head of hematology and oncology research, he was struggling to repeat an animal study of tumor growth by a respected oncologist, published in Cancer Cell. One figure stood out as particularly impressive. But it was proving stubbornly resistant to replication.

In March 2011, Begley saw a chance to play detective. At the annual meeting of the American Association for Cancer Research in Orlando, he and a colleague invited the paper's senior author out to breakfast. Over his orange juice and oatmeal, Begley floated the question: Why couldn't his group get the same result the oncologist had?

The oncologist, whom Begley declines to name, had an easy explanation. "He said, 'We did this experiment a dozen times, got this answer once, and that's the one we decided to publish.' "
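For a rough sense of scale (an illustrative calculation, not a figure Begley cites): if a single run of an experiment with no real effect has a 5% chance of clearing the usual p < 0.05 bar, then across a dozen independent runs the chance that at least one does is 1 - 0.95^12, or about 46%. Publish only that run, and noise becomes a "finding."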


Begley was aghast. "I thought I'd completely misheard him," he says, thinking back on the encounter. "It was disbelief, just disbelief."

As it turned out, the respected oncologist was in good company. A year later, Begley and Lee Ellis, a surgical oncologist at MD Anderson Cancer Center in Houston, published a commentary in Nature about lax standards in preclinical cancer research. They reported that Amgen scientists had been unable to replicate 47 of 53 landmark animal studies in cancer, including the respected oncologist's. (Last year, Begley left Amgen to become chief scientific officer of TetraLogic Pharmaceuticals, a small company based in Malvern, Pennsylvania.)

Striking as it is, Begley and Ellis's exposé is part of a pattern. In fields from clinical medicine to psychology, studies are showing that the literature is filled with papers that present results as stronger than they actually are—and rarely report negative outcomes.

What makes it so difficult to portray negative results straight up? Along with the drive to prove one's theories right, "there is a wide perception that negative studies are useless, indeed a failure," says Douglas Altman, a statistician at the University of Oxford in the United Kingdom. Journals, too, often "like big, exciting, positive findings," he says. As a result, negative results are often relegated to the dustbin, a problem commonly called the "file drawer effect." Or a paper may make a negative study seem more positive by cherry-picking data or spinning the conclusions. But those practices are being challenged.

In some fields, "there's increasing recognition" that leaving negative findings unpublished is "not the right thing to do," says David Allison, director of the Nutrition Obesity Research Center at the University of Alabama, Birmingham, where he studies methodologies in obesity research. Furthermore, as journals move online, where pages are limitless, and as more pledge to treat positive and negative studies equally in the review process, the venues for publishing null findings are expanding.

Still, many researchers and journals continue to cast results as a story that they believe others will want to read. They choose their words carefully, describing, say, an obesity increase as "alarming" rather than "modest," Allison suggests. Or investigators dig through mounds of data in the hunt for shiny nuggets. "Is there some gold in these hills? … We know if you sift through the data enough," Allison says, "you'll find things," even if by chance.
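A minimal simulation (plain Python, not tied to any dataset discussed in the article) shows the arithmetic behind Allison's warning: sift a dataset with no real effects for enough outcomes, and some will clear the usual significance bar by chance.

    import random

    random.seed(1)

    def sift_for_gold(n_outcomes=200, n_per_group=50):
        """Count 'significant' group differences in data with no real effects."""
        hits = 0
        for _ in range(n_outcomes):
            a = [random.gauss(0, 1) for _ in range(n_per_group)]
            b = [random.gauss(0, 1) for _ in range(n_per_group)]
            mean_diff = sum(a) / n_per_group - sum(b) / n_per_group
            se = (2 / n_per_group) ** 0.5   # standard error; unit variance per group
            if abs(mean_diff) / se > 1.96:  # crude two-sided test at the 5% level
                hits += 1
        return hits

    print(sift_for_gold())  # typically about 10 of 200 outcomes, all false positives

Roughly 5% of the outcomes come up "significant" even though nothing real is there; report only those, and the paper glitters.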

A matter of emphasis

Many who study negative results have focused on clinical trials, in part because biased results can directly affect patients, and because a published trial can often be compared with its original goals and protocol, which these days are often publicly available. That makes it easier to see whether a published study accentuates the positive.

Altman and clinical epidemiologist Isabelle Boutron, working together at Oxford about 5 years ago, set out to explore these effects. They and their colleagues culled 616 randomized clinical trials from a public database, all of them published in December 2006, and focused on 72 whose primary goals hadn't panned out. The group hunted for "spin" in the papers, defining it as the "use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome."

Spin appeared in the titles of 13 papers, the abstracts of 49, and the results sections of 21, Altman and Boutron reported in 2010 in the Journal of the American Medical Association. Some authors compared patients before and after treatment instead of comparing those who got the experimental drug with those who didn't. In one case, the authors argued that the treatment was superior by contrasting the treated patients not with the comparison group in their own trial, but with a placebo group from another study.

Boutron says the authors of these papers were upfront about the fact that their study's original goal hadn't been met. Still, "often they tried to spin" what had happened, says Boutron, now at Université Paris Descartes.

She acknowledges that in clinical research, negative results can be difficult to interpret. If a well-designed trial finds that a new drug eases symptoms, it probably does. But finding the opposite doesn't necessarily mean the therapy is hopeless. When a therapy's superiority isn't statistically significant, it might still help—or the difference could be due to chance. "You won't be able to provide a black-and-white answer," says Boutron, who is now struggling to write up a negative trial herself.
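To make that concrete (a hypothetical trial, with numbers chosen only for illustration): suppose the treatment arm improves by 2.0 points on a symptom scale and the control arm by 1.2, with a standard deviation of 5 and 60 patients in each arm. A quick calculation shows why the result is neither a clear win nor a clear refutation:

    import math

    n = 60                                   # hypothetical patients per arm
    diff = 2.0 - 1.2                         # observed difference in mean improvement
    se = math.sqrt(5**2 / n + 5**2 / n)      # standard error of that difference
    z = diff / se                            # about 0.88, short of the 1.96 cutoff
    lo, hi = diff - 1.96 * se, diff + 1.96 * se

    print(f"difference = {diff:.1f} points, z = {z:.2f}")
    print(f"95% confidence interval: ({lo:.1f}, {hi:.1f})")  # roughly (-1.0, 2.6)

The confidence interval is consistent both with no benefit and with a modest real one, which is exactly the gray zone Boutron describes.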

It's not just clinical research. So-called "soft" sciences, such as psychology, carry an even greater risk of biased reporting, according to Daniele Fanelli, who studies bias and misconduct at the University of Montreal in Canada. In August, Fanelli and epidemiologist John Ioannidis at Stanford University added an intriguing twist: They found behavioral research was more likely to claim positive results if the corresponding author was based in the United States.

To probe why researchers spin negative studies, Altman, medical statistician Paula Williamson at the University of Liverpool, and others decided to ask them. They identified 268 clinical trials in which there were suspicions of selective reporting. Investigators from 59 agreed to be interviewed.

The researchers offered diverse explanations for why they failed to report data on an outcome they'd previously pledged to examine. "It was just uninteresting and we thought it confusing so we left it out," said one in the paper by Williamson's group, published in 2011 in BMJ. Another said, "When I take a look at the data I see what best advances the story, and if you include too much data the reader doesn't get the actual important message."

At the American Journal of Clinical Nutrition (AJCN), Editor-in-Chief Dennis Bier encounters this routinely. A study may be designed to measure whether an intervention will lead to weight loss, say, but the manuscript focuses on blood pressure or cholesterol—originally identified as "secondary endpoints" but now taking on a starring role. The weight loss outcome "may occur in one line in table 3," Bier says. For the most part, "these are not people trying to be duplicitous," he says. "They're trying to get as much as they can out of their data." Still, he says that he and his fellow editors "spend an awful lot of time trying to get things out of papers," and "getting the caveats in."

Scientists rarely admit that these practices result in biased reports. Several whose work was cited in studies of bias did not respond to Science's requests for comment, or insisted that their work was statistically sound. After examining papers positing that reducing consumption of sugary drinks decreases obesity, Allison, who takes funding from the soda industry, flagged several papers for "distortion of information." The senior author of one, Barry Popkin of the University of North Carolina, Chapel Hill, School of Public Health, agreed that the abstract overstated the case, suggesting that swapping water or diet drinks for calorie-heavy beverages caused weight loss when that outcome was not statistically significant. Popkin blamed the journal, AJCN, for the sentence. "The journal process is an imperfect process," he told Science. "That's what the editors wanted to put in there."

Bier agreed that the change had been urged by the journal—but only because, he said, the original version overstated the result to an even greater degree.

Telling it like it is

Forest ecologist Sean Thomas of the University of Toronto in Canada and his Ph.D. student spent 2 years traveling to and from the rainforests of Dominica testing whether, as long suspected, trees need more light the larger they grow. His results were lackluster, but not his publication strategy.

In the end, the relationship between light requirements and tree growth was flimsy, Thomas found, but as with many negative results, he couldn't say for sure that the thesis was wrong. The Ph.D. student, despairing, switched topics. "But I was bound and determined to publish," Thomas says.

"We thought about ways of positively spinning things," he confesses. Then he hit on another solution: submitting the paper to the Journal of Negative Results: Ecology & Evolutionary Biology. A group of postdocs at the University of Helsinki conceived of the journal in the early 2000s and launched it in 2004. Its publication record is paltry; it gets only two or three submissions a year. "It could hardly get any lower unless they were publishing nothing," Thomas says.

Undeterred, he sent his paper in, and it appeared in 2011. "We didn't have to package things in a way that might pull out some kind of marginal result," Thomas says. "This was an honest way of presenting the information that would also make it clear what the story was from the beginning."

Are studies that tell it like it is the exception to the rule, or the cusp of a new trend? Most agree that journals for negative results—another exists for biomedical studies—are not a comprehensive solution. Some mainstream journals are getting in on the act: A study presented last month at the International Congress on Peer Review and Biomedical Publication examined how eight medical journals had handled recently submitted papers reporting clinical trials and found that they were just as likely to publish those with negative results as those with positive ones.

Allison believes that making all raw data public could minimize bias in reporting, as authors face the fact that others will parse their primary work. A more lasting solution may come from a shift in norms: embracing and sharing null findings without regret or shame. "The only way out of this," Fanelli says, is that "people report their studies saying exactly what they did."

Openness has its rewards. Although Thomas's Ph.D. student, Adam Martin, adopted a new thesis topic after the Dominican rainforest flop, Thomas is urging him, as he dives into the job market, to launch his job talk with that story. "It's an unusual enough thing," Thomas says, "that you'll stand out" in a crowded field.
