
# PublicationBiasBenchmark

PublicationBiasBenchmark is an R package for benchmarking publication bias correction methods through simulation studies. It provides:

- Predefined data-generating mechanisms (DGMs) from the literature
- Functions for running meta-analytic methods on simulated data
- Pre-simulated datasets and pre-computed results for reproducible benchmarks
- Tools for visualizing and comparing method performance

All datasets and results are hosted on OSF: https://doi.org/10.17605/OSF.IO/EXF3M

For the methodology of living synthetic benchmarks, please cite:

Bartoš, F., Pawel, S., & Siepe, B. S. (2025). Living synthetic benchmarks: A neutral and cumulative framework for simulation studies. arXiv Preprint. https://doi.org/10.48550/arXiv.2510.19489

For the publication bias benchmark R package, please cite:

Bartoš, F., Pawel, S., & Siepe, B. S. (2025). PublicationBiasBenchmark: Benchmark for publication bias correction methods (version 0.1.0). https://github.com/FBartos/PublicationBiasBenchmark

Overviews of the benchmark results are available as articles on the package website.

Contributor guidelines for extending the package with data-generating mechanisms, methods, and results are also available on the package website.

Illustrations of how to use the precomputed datasets, results, and measures are available on the package website as well.

The rest of this README gives an overview of the main features of the package.

## Installation

Install the released version from CRAN:

```r
install.packages("PublicationBiasBenchmark")
```

Install the latest development version from GitHub:

```r
# requires the remotes package
remotes::install_github("FBartos/PublicationBiasBenchmark")
```

## Versions

Additions or modifications of a method or data-generating mechanism are always reflected in a minor version update. Minor changes to infrastructure and the like are reflected in patch updates.
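Under this versioning scheme, the installed package version identifies which state of the benchmark produced a given set of results. A minimal sketch using base R's `packageVersion()` (the printed version will depend on your installation):

```r
# query the installed version of the package
# (assumes PublicationBiasBenchmark is installed; 0.1.0 at the time of writing)
packageVersion("PublicationBiasBenchmark")
```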

## Usage

```r
library(PublicationBiasBenchmark)
```

### Simulating From Existing Data-Generating Mechanisms

```r
# obtain a data.frame with pre-defined conditions
dgm_conditions("Stanley2017")

# simulate data from the second condition
df <- simulate_dgm("Stanley2017", 2)

# fit a method
run_method("RMA", df)
```

### Using Pre-Simulated Datasets

```r
# download the pre-simulated datasets
# (the location for storing the package resources needs to be specified first)
PublicationBiasBenchmark.options(resources_directory = "/path/to/files")
download_dgm_datasets("no_bias")

# retrieve the first repetition of the first condition from the downloaded datasets
retrieve_dgm_dataset("no_bias", condition_id = 1, repetition_id = 1)
```

### Using Pre-Computed Results

```r
# download the pre-computed results
download_dgm_results("no_bias")

# retrieve results of RMA for the first repetition of the first condition
retrieve_dgm_results("no_bias", method = "RMA", condition_id = 1, repetition_id = 1)

# retrieve all results across all conditions and repetitions
retrieve_dgm_results("no_bias")
```

### Using Pre-Computed Measures

```r
# download the pre-computed measures
download_dgm_measures("no_bias")

# retrieve bias measures of RMA for the first condition
retrieve_dgm_measures("no_bias", measure = "bias", method = "RMA", condition_id = 1)

# retrieve all measures across all conditions
retrieve_dgm_measures("no_bias")
```

### Simulating From an Existing DGM With Custom Settings

```r
# define the simulation settings
sim_settings <- list(
  n_studies     = 100,
  mean_effect   = 0.3,
  heterogeneity = 0.1
)

# check whether the settings are feasible
# (called outside of the simulation loop so validation does not slow down the simulation)
validate_dgm_setting("no_bias", sim_settings)

# simulate the data
df <- simulate_dgm("no_bias", sim_settings)

# fit a method
run_method("RMA", df)
```

## Key Functions

### Data-Generating Mechanisms

- `dgm_conditions()`: list the pre-defined conditions of a DGM
- `simulate_dgm()`: simulate data from a DGM
- `validate_dgm_setting()`: check whether custom DGM settings are feasible
- `download_dgm_datasets()` / `retrieve_dgm_dataset()`: download and retrieve pre-simulated datasets

### Method Estimation and Results

- `run_method()`: fit a publication bias correction method to a dataset
- `download_dgm_results()` / `retrieve_dgm_results()`: download and retrieve pre-computed results

### Performance Measures and Results

- `download_dgm_measures()` / `retrieve_dgm_measures()`: download and retrieve pre-computed performance measures

## Available Data-Generating Mechanisms

See `methods("dgm")` for the full list.

## Available Methods

See `methods("method")` for the full list.

## Available Performance Measures

See `?measures` for the full list of performance measures and their Monte Carlo standard errors.
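As a quick orientation, the registered DGMs and methods can be listed directly from an R session. A minimal sketch, assuming the package is installed and attached:

```r
library(PublicationBiasBenchmark)

# S3 methods registered for the dgm() and method() generics
# correspond to the available DGMs and correction methods
methods("dgm")
methods("method")
```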

## DGM OSF Repositories

All DGMs are linked to the OSF repository (https://osf.io/exf3m/), which stores the corresponding pre-simulated datasets, pre-computed results, and pre-computed measures.
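Because each DGM's resources follow the same layout on OSF, all three resource types for a DGM can be fetched in one go. A sketch reusing the download functions shown above; `"/path/to/files"` is a placeholder:

```r
# set where downloaded resources will be stored ("/path/to/files" is a placeholder)
PublicationBiasBenchmark.options(resources_directory = "/path/to/files")

# fetch all three resource types for a given DGM
download_dgm_datasets("no_bias")
download_dgm_results("no_bias")
download_dgm_measures("no_bias")
```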

## References

Alinaghi, N., & Reed, W. R. (2018). Meta-analysis and publication bias: How well does the FAT-PET-PEESE procedure work? *Research Synthesis Methods*, *9*(2), 285–311.
Andrews, I., & Kasy, M. (2019). Identification of and correction for publication bias. *American Economic Review*, *109*(8), 2766–2794.
Bartoš, F., Maier, M., Wagenmakers, E.-J., Doucouliagos, H., & Stanley, T. (2023). Robust Bayesian meta-analysis: Model-averaging across complementary publication bias adjustment methods. *Research Synthesis Methods*, *14*(1), 99–116.
Bom, P. R., & Rachinger, H. (2019). A kinked meta-regression model for publication bias correction. *Research Synthesis Methods*, *10*(4), 497–514.
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. *Advances in Methods and Practices in Psychological Science*, *2*(2), 115–144.
Duval, S. J., & Tweedie, R. L. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. *Biometrics*, *56*(2), 455–463.
Irsova, Z., Bom, P. R., Havranek, T., & Rachinger, H. (2025). Spurious precision in meta-analysis of observational research. *Nature Communications*, *16*, 8454.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve and effect size: Correcting for publication bias using only significant results. *Perspectives on Psychological Science*, *9*(6), 666–681.
Stanley, T. D., & Doucouliagos, H. (2014). Meta-regression approximations to reduce publication selection bias. *Research Synthesis Methods*, *5*(1), 60–78.
Stanley, T. D., & Doucouliagos, H. (2024). Harnessing the power of excess statistical significance: Weighted and iterative least squares. *Psychological Methods*, *29*(2), 407–420.
Stanley, T. D., Doucouliagos, H., & Ioannidis, J. P. (2017). Finding the power to reduce publication bias. *Statistics in Medicine*, *36*(10), 1580–1598.
van Aert, R. C. M., & van Assen, M. A. L. M. (2025). Correcting for publication bias in a meta-analysis with the p-uniform\* method. *Psychonomic Bulletin & Review*.
van Assen, M. A. L. M., van Aert, R. C. M., & Wicherts, J. M. (2015). Meta-analysis using effect size distributions of only statistically significant studies. *Psychological Methods*, *20*(3), 293–309.
Vevea, J. L., & Hedges, L. V. (1995). A general linear model for estimating effect size in the presence of publication bias. *Psychometrika*, *60*(3), 419–435.


Try the PublicationBiasBenchmark package in your browser

Any scripts or data that you put into this service are public.

PublicationBiasBenchmark documentation built on March 16, 2026, 5:07 p.m.