```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
library(PublicationBiasBenchmark)
```
This vignette explains how to access and use the precomputed performance measures from the PublicationBiasBenchmark package.
The package provides comprehensive benchmark results for various publication bias correction methods across different data-generating mechanisms (DGMs),
allowing researchers to evaluate and compare method performance without running computationally intensive simulations themselves.
To avoid re-downloading the performance measures every time this vignette is re-knit, evaluation of the code chunks below is disabled. (To examine the output, copy the code into your local R session.)
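Disabling evaluation is a standard knitr setting; a minimal sketch of how this could be done in the setup chunk (this is an illustrative setting, not part of the package):

```r
# Disable evaluation of all subsequent chunks so the vignette
# knits without downloading any data
knitr::opts_chunk$set(eval = FALSE)
```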
The package provides precomputed performance measures for multiple publication bias correction methods evaluated under different simulation conditions. These measures include:

- bias and RMSE of the effect size estimates
- coverage of the 95% confidence intervals
- power and Type I error rates
- convergence rates

Each measure is accompanied by its Monte Carlo standard error.
The precomputed results are organized by data-generating mechanism (DGM), with each DGM representing different patterns of publication bias and meta-analytic conditions.
The package includes precomputed measures for several DGMs.
You can view the specific conditions for each DGM using the dgm_conditions() function:
```r
# View conditions for the Stanley2017 DGM
conditions <- dgm_conditions("Stanley2017")
head(conditions)
```
Before accessing the precomputed measures, you need to download them from the package repository. The download_dgm_measures() function downloads the measures for a specified DGM:
```r
# Download precomputed measures for the Stanley2017 DGM
download_dgm_measures("Stanley2017")
```
The measures are downloaded to a local cache directory and are automatically available for subsequent analysis.
You only need to download them once, unless the benchmark measures were updated (e.g., with a new method); in that case, you need to specify the overwrite = TRUE argument.
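For example, after a benchmark update the cached files can be refreshed like this (using the overwrite argument mentioned above):

```r
# Re-download the measures, replacing the locally cached files
download_dgm_measures("Stanley2017", overwrite = TRUE)
```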
Once downloaded, you can retrieve the precomputed measures using the retrieve_dgm_measures() function. This function offers flexible filtering options to extract exactly the data you need.
You can retrieve measures for a specific method and condition:
```r
# Retrieve bias measures for the RMA method in condition 1
retrieve_dgm_measures(
  dgm            = "Stanley2017",
  measure        = "bias",
  method         = "RMA",
  method_setting = "default",
  condition_id   = 1
)
```
The measure argument can be any of the measure function names listed in the measures() documentation.
To retrieve all measures across all conditions and methods, simply omit the filtering arguments:
```r
# Retrieve all measures across all conditions and methods
df <- retrieve_dgm_measures("Stanley2017")
```
This returns a comprehensive data frame with columns:
- method: publication bias correction method name
- method_setting: specific method configuration
- condition_id: simulation condition identifier
- bias, bias_mcse, rmse, rmse_mcse, ...: performance measures and their Monte Carlo standard errors

You can also filter by method name:
```r
# Retrieve all measures for the PET-PEESE method
pet_peese_results <- retrieve_dgm_measures(
  dgm    = "Stanley2017",
  method = "PETPEESE"
)
```
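Analogously, you can filter by condition instead of method, which is useful for comparing all methods under a single simulation condition (using the same condition_id argument shown earlier):

```r
# Retrieve all measures for condition 1 across all methods
condition1_results <- retrieve_dgm_measures(
  dgm          = "Stanley2017",
  condition_id = 1
)
```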
Once you have retrieved the measures, you can create visualizations to compare method performance. Here's an example that creates a multi-panel plot comparing all methods across all conditions:
```r
# Retrieve all measures across all conditions and methods
df <- retrieve_dgm_measures("Stanley2017")

# Retrieve conditions to identify null vs. alternative hypotheses
conditions <- dgm_conditions("Stanley2017")

# Create readable method labels
df$label <- with(df, paste0(method, " (", method_setting, ")"))

# Identify conditions under the null hypothesis (H₀: mean effect = 0)
df$H0 <- df$condition_id %in% conditions$condition_id[conditions$mean_effect == 0]

# Create multi-panel visualization
par(mfrow = c(3, 2))
par(mar = c(4, 10, 1, 1))

# Panel 1: Convergence rates
boxplot(convergence * 100 ~ label, horizontal = TRUE, las = 1, ylab = "",
        ylim = c(20, 100), data = df, xlab = "Convergence (%)")

# Panel 2: RMSE
boxplot(rmse ~ label, horizontal = TRUE, las = 1, ylab = "",
        ylim = c(0, 0.6), data = df, xlab = "RMSE")

# Panel 3: Bias
boxplot(bias ~ label, horizontal = TRUE, las = 1, ylab = "",
        ylim = c(-0.25, 0.25), data = df, xlab = "Bias")
abline(v = 0, lty = 3) # Reference line at zero

# Panel 4: Coverage
boxplot(coverage * 100 ~ label, horizontal = TRUE, las = 1, ylab = "",
        ylim = c(30, 100), data = df, xlab = "95% CI Coverage (%)")
abline(v = 95, lty = 3) # Reference line at nominal level

# Panel 5: Type I error rate (H₀ conditions only)
boxplot(power * 100 ~ label, horizontal = TRUE, las = 1, ylab = "",
        ylim = c(0, 40), data = df[df$H0, ], xlab = "Type I Error Rate (%)")
abline(v = 5, lty = 3) # Reference line at α = 0.05

# Panel 6: Power (H₁ conditions only)
boxplot(power * 100 ~ label, horizontal = TRUE, las = 1, ylab = "",
        ylim = c(10, 100), data = df[!df$H0, ], xlab = "Power (%)")
```
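Beyond plotting, the retrieved data frame can be summarized with base R. As a sketch, the mean absolute bias per method and setting (using the columns described above) could be computed like this:

```r
# Mean absolute bias per method/setting, averaged across conditions
df$label    <- with(df, paste0(method, " (", method_setting, ")"))
bias_by_method <- aggregate(abs(bias) ~ label, data = df, FUN = mean)

# Order methods from least to most biased
bias_by_method[order(bias_by_method$`abs(bias)`), ]
```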