Research Article

Estimating the reproducibility of psychological science


Science  28 Aug 2015:
Vol. 349, Issue 6251, aac4716
DOI: 10.1126/science.aac4716

eLetters is an online forum for ongoing peer review.

  • Toward a Credible Science
    • Dominic Guitard, PhD student, Université de Moncton
    • Other Contributors:
      • Sylvain Fiset, Full professor, Université de Moncton

    In line with the Open Science Collaboration (1), we would like to point out that the replication crisis in science is also linked to the statistical framework that has guided scientific advancement over the last century. Although researchers have made countless valuable scientific contributions using the frequentist approach as the dominant statistical framework, over the years the inadequate use of p-values and confidence intervals has severely compromised the credibility of science (1–3). To improve reproducibility as well as research practice, numerous solutions, such as registered reports, open-access practices, and increased transparency, have recently been put forward. However, none of them is sufficient to resolve one of the most fundamental problems: estimation bias. Estimation bias mostly arises when a scientific conclusion is drawn from a single set of studies. Unfortunately, the data generated by that particular set of studies are usually perceived, by scientists and non-scientists alike, as the only source of information available for reaching a valid scientific conclusion.

    To curb estimation bias, we urge the scientific community to adopt a fully Bayesian framework. The use of a Bayesian framework has broad implications, such as embracing and quantifying uncertainty and integrating past research. Forty-four years ago, a similar proposition was put forward by Lindley (4). Regrettably, very little has changed since (1). Without a serious paradigm shift, we are doo...

    Competing Interests: None declared.
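
The Bayesian updating the letter advocates can be sketched with a minimal, hypothetical example (ours, not the authors'): a conjugate Beta-Binomial model in which the evidence from past studies forms the prior for a new study, so no single data set is treated as the sole source of information, and the remaining uncertainty is carried explicitly in the posterior. The study counts below are invented for illustration.

```python
# Minimal sketch of Bayesian evidence accumulation (Beta-Binomial model).
# Past research enters as the prior; a new study updates it rather than
# standing alone, and the posterior quantifies the remaining uncertainty.

from math import sqrt

def update_beta(prior_a, prior_b, successes, failures):
    """Conjugate update: Beta(a, b) prior + binomial data -> Beta posterior."""
    return prior_a + successes, prior_b + failures

def beta_summary(a, b):
    """Posterior mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

# Hypothetical numbers: past studies observed the effect in 30 of 50 tests.
a, b = update_beta(1, 1, 30, 20)   # flat Beta(1, 1) prior + past evidence
# A new study observes the effect in 8 of 20 tests.
a, b = update_beta(a, b, 8, 12)    # past evidence carries forward as the prior
mean, sd = beta_summary(a, b)
print(f"posterior mean = {mean:.3f}, sd = {sd:.3f}")
```

In a frequentist reading, the new study alone would suggest an effect rate of 8/20 = 0.40; the posterior instead blends it with the earlier evidence, which is exactly the integration of past research the letter calls for.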
