boot_compare_smd — R Documentation
View source: R/boot_compare_smd.R
A function to compare standardized mean differences (SMDs) between independent studies using bootstrap methods. This function is intended for comparing the compatibility of original studies with replication studies (lower p-values indicate lower compatibility).
boot_compare_smd(
x1,
y1 = NULL,
x2,
y2 = NULL,
null = 0,
paired = FALSE,
alternative = c("two.sided", "less", "greater", "equivalence", "minimal.effect"),
R = 1999,
alpha = 0.05
)
x1: A numeric vector of data values from study 1 (first group for two-sample designs, or the only group for one-sample/paired designs).

y1: An optional numeric vector of data values from study 1 (second group for two-sample designs, or second measurement for paired designs). Set to NULL for one-sample designs.

x2: A numeric vector of data values from study 2 (first group for two-sample designs, or the only group for one-sample/paired designs).

y2: An optional numeric vector of data values from study 2 (second group for two-sample designs, or second measurement for paired designs). Set to NULL for one-sample designs.

null: A number or vector indicating the null hypothesis value(s). For equivalence or minimal-effect tests, a single value is treated as symmetric bounds (±value), while two values are treated as the lower and upper bounds.

paired: A logical indicating whether the SMD is from a paired or independent-samples design. For a one-sample design, set paired to TRUE.

alternative: A character string specifying the alternative hypothesis: "two.sided", "less", "greater", "equivalence", or "minimal.effect". You can specify just the initial letter.

R: Number of bootstrap replications (default = 1999).

alpha: Alpha level (default = 0.05).
This function tests for differences between standardized mean differences (SMDs) from independent studies using bootstrap resampling. Unlike the compare_smd function, which works with summary statistics, this function works with raw data and uses bootstrapping to estimate confidence intervals and p-values.
The function supports both paired/one-sample designs and independent-samples designs.

For paired/one-sample designs (paired = TRUE):

- If y1 and y2 are provided, the function calculates differences between the paired measures.
- If y1 and y2 are NULL, the function treats x1 and x2 as one-sample data.
- SMDs are calculated as Cohen's dz (mean of the differences divided by the standard deviation of the differences).

For independent-samples designs (paired = FALSE):

- Requires x1, y1, x2, and y2 (first and second groups for both studies).
- If y1 and y2 are NULL, the function treats x1 and x2 as one-sample data with paired = TRUE.
- SMDs are calculated as Cohen's ds (mean difference divided by the pooled standard deviation).
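The two SMD variants described above can be sketched directly from their definitions. This is an illustrative sketch, not the package's internal code; `cohens_dz` and `cohens_ds` are hypothetical helper names.

```r
# Cohen's dz for a paired/one-sample design:
# mean of the differences divided by the SD of the differences.
cohens_dz <- function(x, y = NULL) {
  d <- if (is.null(y)) x else x - y
  mean(d) / sd(d)
}

# Cohen's ds for independent groups:
# mean difference divided by the pooled standard deviation.
cohens_ds <- function(x, y) {
  nx <- length(x); ny <- length(y)
  sp <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / sp
}

cohens_dz(c(1, 2, 3, 4), c(0, 1, 2, 2))  # differences 1, 1, 1, 2 -> 1.25 / 0.5 = 2.5
cohens_ds(c(1, 2, 3), c(2, 3, 4))        # -1 / 1 = -1
```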
The function supports both standard hypothesis testing and equivalence/minimal effect testing:
For standard tests (two.sided, less, greater), the function tests whether the difference between SMDs differs from the null value (typically 0).
For equivalence testing ("equivalence"), it determines whether the difference falls within the specified bounds, which can be set asymmetrically.
For minimal effect testing ("minimal.effect"), it determines whether the difference falls outside the specified bounds.
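One way to picture the difference between equivalence and minimal-effect testing is via a confidence interval for the SMD difference. This is a hedged sketch only; the function itself works with bootstrap p-values, and these helper names are hypothetical.

```r
# Equivalence: supported when the CI for the difference lies
# entirely inside the bounds (low, high).
equivalence_shown <- function(ci_low, ci_high, low, high) {
  ci_low > low && ci_high < high
}

# Minimal effect: supported when the CI lies entirely outside the bounds.
minimal_effect_shown <- function(ci_low, ci_high, low, high) {
  ci_high < low || ci_low > high
}

equivalence_shown(-0.1, 0.15, -0.2, 0.2)    # TRUE: difference is negligible
minimal_effect_shown(0.35, 0.6, -0.3, 0.3)  # TRUE: difference exceeds the bounds
```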
When performing equivalence or minimal effect testing:

- If a single value is provided for null, symmetric bounds (±value) will be used.
- If two values are provided for null, they will be used as the lower and upper bounds.
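The bound-handling rule above can be mirrored in a few lines; `null_to_bounds` is a hypothetical helper, not part of the package.

```r
# One value -> symmetric bounds (-value, value);
# two values -> sorted into (lower, upper).
null_to_bounds <- function(null) {
  if (length(null) == 1) c(-abs(null), abs(null)) else sort(null)
}

null_to_bounds(0.2)          # -0.2  0.2
null_to_bounds(c(0.3, -0.1)) # -0.1  0.3
```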
The bootstrap procedure follows these steps:

1. Calculate SMDs for both studies using the original data.
2. Calculate the difference between SMDs and its standard error.
3. Generate R bootstrap samples by resampling with replacement.
4. Calculate SMDs and their difference for each bootstrap sample.
5. Calculate test statistics for each bootstrap sample.
6. Calculate confidence intervals using the percentile method.
7. Compute p-values by comparing the observed test statistics to their bootstrap distributions.
Note on p-value calculation: The function uses the bootstrap distribution of test statistics (z-scores) rather than the raw differences to calculate p-values. This approach is analogous to traditional hypothesis testing and estimates the probability of obtaining test statistics as extreme as those observed in the original data under repeated sampling.
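The steps above, together with the z-based p-value, can be sketched for the independent-groups, two-sided case. This is a minimal illustration under simplified assumptions; the package's actual implementation differs in detail.

```r
set.seed(42)
x1 <- rnorm(30); y1 <- rnorm(30, mean = 0.5)
x2 <- rnorm(25); y2 <- rnorm(25, mean = 0.3)

# Cohen's ds: mean difference over pooled SD.
cohens_ds <- function(x, y) {
  sp <- sqrt(((length(x) - 1) * var(x) + (length(y) - 1) * var(y)) /
               (length(x) + length(y) - 2))
  (mean(x) - mean(y)) / sp
}

R <- 999
d_obs <- cohens_ds(x1, y1) - cohens_ds(x2, y2)

# Resample each group with replacement and recompute the SMD difference.
d_boot <- replicate(R, {
  cohens_ds(sample(x1, replace = TRUE), sample(y1, replace = TRUE)) -
    cohens_ds(sample(x2, replace = TRUE), sample(y2, replace = TRUE))
})

# Percentile confidence interval for the difference in SMDs.
ci <- quantile(d_boot, c(0.025, 0.975))

# z-style p-value: compare the observed test statistic to the bootstrap
# distribution of centered test statistics.
se <- sd(d_boot)
z_boot <- (d_boot - d_obs) / se
p_value <- mean(abs(z_boot) >= abs(d_obs / se))
```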
A list with class "htest" containing the following components:
statistic: z-score (observed) with name "z (observed)"
p.value: The p-value for the test under the null hypothesis
conf.int: Bootstrap confidence interval for the difference in SMDs
estimate: Difference in SMD between studies
null.value: The specified hypothesized value(s) for the null hypothesis
alternative: Character string indicating the alternative hypothesis
method: Description of the SMD type and design used
df_ci: Data frame containing confidence intervals for the difference and individual SMDs
boot_res: List containing the bootstrap samples for SMDs, their difference, and test statistics
data.name: "Bootstrapped" to indicate bootstrap methods were used
call: The matched call
Other compare studies: boot_compare_cor(), compare_cor(), compare_smd()
# Example 1: Comparing two independent samples SMDs (standard test)
set.seed(123)
# Study 1 data
x1 <- rnorm(30, mean = 0)
y1 <- rnorm(30, mean = 0.5, sd = 1)
# Study 2 data
x2 <- rnorm(25, mean = 0)
y2 <- rnorm(25, mean = 0.3, sd = 1)
# Two-sided test for independent samples (use fewer bootstraps for example)
boot_compare_smd(x1, y1, x2, y2, paired = FALSE,
alternative = "two.sided", R = 99)
# Example 2: Testing for equivalence between SMDs
# Testing if the difference between SMDs is within ±0.2
boot_compare_smd(x1, y1, x2, y2, paired = FALSE,
alternative = "equivalence", null = 0.2, R = 99)
# Example 3: Testing for minimal effects
# Testing if the difference between SMDs is outside ±0.3
boot_compare_smd(x1, y1, x2, y2, paired = FALSE,
alternative = "minimal.effect", null = 0.3, R = 99)
# Example 4: Comparing paired samples SMDs
# Study 1 data (pre-post measurements)
pre1 <- rnorm(20, mean = 10, sd = 2)
post1 <- rnorm(20, mean = 12, sd = 2)
# Study 2 data (pre-post measurements)
pre2 <- rnorm(25, mean = 10, sd = 2)
post2 <- rnorm(25, mean = 11, sd = 2)
# Comparing paired designs
boot_compare_smd(x1 = pre1, y1 = post1, x2 = pre2, y2 = post2,
paired = TRUE, alternative = "greater", R = 99)
# Example 5: Using asymmetric bounds for equivalence testing
boot_compare_smd(x1, y1, x2, y2, paired = FALSE,
alternative = "equivalence", null = c(-0.1, 0.3), R = 99)