statcheck-package (R Documentation)
The statcheck package extracts Null Hypothesis Significance Test (NHST) results from articles (or plain text) and recomputes the p-values to check whether each reported NHST result is internally consistent.
statcheck can be used for multiple purposes, including:
- Self-checks: you can use statcheck to make sure your manuscript doesn't contain copy-paste errors or other inconsistencies before you submit it to a journal.
- Peer review: editors and reviewers can use statcheck to check submitted manuscripts for statistical inconsistencies, and can ask authors for a correction or clarification before publishing a manuscript.
- Research: statcheck can be used to automatically extract statistical test results from articles for further analysis. For instance, you can investigate whether statistical inconsistencies can be predicted (see, e.g., Nuijten et al., 2017, doi: 10.1525/collabra.102), or analyze p-value distributions (see, e.g., Hartgerink et al., 2016, doi: 10.7717/peerj.1935).
The most basic usage of statcheck is to directly extract NHST results and check for inconsistencies in a string of text. See statcheck for details and an example of how to do this.
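For illustration, a minimal sketch of this workflow (the example string is made up; statcheck() returns a data frame with one row per detected NHST result):

    # load the package
    library(statcheck)

    # a toy string reporting one NHST result in APA style (hypothetical example)
    txt <- "The effect was significant, t(28) = 2.20, p < .001."

    # extract the test, recompute the p-value, and flag whether the
    # reported and recomputed p-values are consistent
    statcheck(txt)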
Another option is to run statcheck on an article (PDF or HTML). This is useful if you want to check a single article for inconsistencies (e.g., as a final check before you submit it). Depending on whether the article is in HTML or PDF format, you can use checkHTML or checkPDF, respectively. Note: it is recommended to check articles in HTML, as converting PDF files to plain text sometimes introduces conversion errors.
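For example, a minimal sketch of checking a single article (the file paths below are hypothetical placeholders):

    library(statcheck)

    # check an HTML version of an article (recommended)
    checkHTML("path/to/article.html")

    # or check the PDF version
    checkPDF("path/to/article.pdf")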
Finally, it is possible to run statcheck on an entire folder of articles. This is often useful for meta-research. To do so, you can use checkPDFdir to check all PDF articles in a folder, checkHTMLdir to check all HTML articles in a folder, and checkdir to check both PDF and HTML articles in a folder.
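A minimal sketch of checking a whole folder (the directory path is a hypothetical placeholder):

    library(statcheck)

    # check all PDF articles in a folder
    results_pdf <- checkPDFdir("path/to/articles")

    # check all HTML articles in a folder
    results_html <- checkHTMLdir("path/to/articles")

    # or check both PDF and HTML articles at once
    results_all <- checkdir("path/to/articles")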
It is important to note that statcheck is not perfect. Its performance in detecting NHST results depends on the typesetting and reporting style of an article and can vary widely. However, statcheck performs well in classifying the retrieved statistics into different consistency categories. We found that statcheck's sensitivity (true positive rate) and specificity (true negative rate) were high: between 85.3% and 100%, and between 96.0% and 100%, respectively, depending on the assumptions and settings. The overall accuracy of statcheck ranged from 96.2% to 99.9%. Details can be found in Nuijten et al. (2017).
Details on what statcheck can and cannot do, and on how to install the package and the necessary program Xpdf, can be found in the online manual.
statcheck is also available as a free online web app at http://statcheck.io.
Maintainer: Michele B. Nuijten <m.b.nuijten@uvt.nl> (ORCID)
Authors:
Sacha Epskamp <mail@sachaepskamp.com> (ORCID)
Other contributors:
Willem Sleegers (ORCID) [contributor]
Edoardo Costantini [contributor]
Paul van der Laken (ORCID) [contributor]
Sean Rife (ORCID) [contributor]
John Sakaluk (ORCID) [contributor]
Chris Hartgerink (ORCID) [contributor]
Steve Haroz (ORCID) [contributor]
References:
Hartgerink, C. H. J., Van Aert, R. C. M., Nuijten, M. B., Wicherts, J. M., & Van Assen, M. A. L. M. (2016). Distributions of p-values smaller than .05 in psychology: What is going on? PeerJ, 4, e1935. doi: 10.7717/peerj.1935
Nuijten, M. B., Borghuis, J., Veldkamp, C. L. S., Dominguez-Alvarez, L., Van Assen, M. A. L. M., & Wicherts, J. M. (2017). Journal data sharing policies and statistical reporting inconsistencies in psychology. Collabra: Psychology, 3(1), 1-22. doi: 10.1525/collabra.102
Nuijten, M. B., Van Assen, M. A. L. M., Hartgerink, C. H. J., Epskamp, S., & Wicherts, J. M. (2017). The validity of the tool "statcheck" in discovering statistical reporting inconsistencies. Preprint retrieved from https://osf.io/preprints/psyarxiv/tcxaj/