prop_stronger: Estimate proportion of population effect sizes above or below...


View source: R/functions.R

Description

Estimates the proportion of true (i.e., population parameter) effect sizes in a meta-analysis that are above or below a specified threshold of scientific importance based on the parametric methods described in Mathur & VanderWeele (2018), the nonparametric calibrated methods described in Mathur & VanderWeele (2020b), and the cluster-bootstrapping methods described in Mathur & VanderWeele (2020c).

Usage

prop_stronger(
  q,
  M = NA,
  t2 = NA,
  se.M = NA,
  se.t2 = NA,
  ci.level = 0.95,
  tail = NA,
  estimate.method = "calibrated",
  ci.method = "calibrated",
  calib.est.method = "DL",
  dat = NULL,
  R = 2000,
  bootstrap = "ifneeded",
  yi.name = "yi",
  vi.name = "vi",
  cluster.name = NA
)

Arguments

q

Population effect size that is the threshold for "scientific importance"

M

Pooled point estimate from meta-analysis (required only for parametric estimation/inference and for Shapiro p-value)

t2

Estimated heterogeneity (tau^2) from meta-analysis (required only for parametric estimation/inference and for Shapiro p-value)

se.M

Estimated standard error of pooled point estimate from meta-analysis (required only for parametric inference)

se.t2

Estimated standard error of tau^2 from meta-analysis (required only for parametric inference)

ci.level

Confidence level as a proportion (e.g., 0.95 for a 95% confidence interval)

tail

"above" for the proportion of effects above q; "below" for the proportion of effects below q.

estimate.method

Method for point estimation of the proportion ("calibrated" or "parametric"). See Details.

ci.method

Method for confidence interval estimation ("calibrated", "parametric", or "sign.test"). See Details.

calib.est.method

Method for estimating the mean and variance of the population effects when computing calibrated estimates. See Details.

dat

Data frame containing the study-level point estimates (in a column whose name matches yi.name) and their variances (in a column whose name matches vi.name). Not required if using ci.method = "parametric" and bootstrapping is not needed.

R

Number of bootstrap or simulation iterates (depending on the methods chosen). Not required if using ci.method = "parametric" and bootstrapping is not needed.

bootstrap

Used only when ci.method = "parametric" (otherwise the bootstrap is always used). If bootstrap = "ifneeded", the confidence interval is bootstrapped whenever the estimated proportion is less than 0.15 or greater than 0.85; if bootstrap = "never", no inference is returned in those cases.

yi.name

Name of the variable in dat containing the study-level point estimates. Used for bootstrapping and conducting Shapiro test.

vi.name

Name of the variable in dat containing the study-level variances. Used for bootstrapping and conducting Shapiro test.

cluster.name

Name of the variable in dat identifying clusters of studies. If left NA, assumes studies are independent (i.e., each study is its own cluster).

Details

These methods perform well only in meta-analyses with at least 10 studies; we do not recommend reporting them for smaller meta-analyses. By default, prop_stronger performs estimation using a "calibrated" method (Mathur & VanderWeele, 2020b; 2020c) that extends work by Wang and Lee (2019). This method makes no assumptions about the distribution of population effects, performs well in meta-analyses with as few as 10 studies, and can accommodate clustering of the studies (e.g., when articles contributed multiple studies on similar populations). Calculating the calibrated estimates requires first estimating the meta-analytic mean and variance, which by default is done with the moments-based DerSimonian-Laird estimator, as in Wang and Lee (2019). To use a different estimator, set calib.est.method to any method accepted by metafor::rma.uni (the value is passed to that function; see its documentation).

For inference, the calibrated method uses bias-corrected and accelerated (BCa) bootstrapping, which accounts for clustered point estimates if cluster.name is specified (Mathur & VanderWeele, 2020c). The bootstrap may fail to converge for some small meta-analyses in which the threshold is far from the mean of the population effects; in these cases, you can try choosing a threshold closer to the pooled point estimate of your meta-analysis. The mean of the bootstrap estimates of the proportion is returned as a diagnostic for potential bias in the estimated proportion.
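For example, a calibrated analysis with clustered studies and a REML estimate of the mean and variance might be specified as in the following sketch, where the data frame my_dat, its columns yi, vi, and article_id, and the threshold q = 0.2 are hypothetical:

prop_stronger( q = 0.2,
               tail = "above",
               estimate.method = "calibrated",
               ci.method = "calibrated",
               calib.est.method = "REML",    # passed to metafor::rma.uni
               dat = my_dat,
               yi.name = "yi",
               vi.name = "vi",
               cluster.name = "article_id",  # cluster the bootstrap by article
               R = 1000 )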

The parametric method assumes that the population effects are approximately normal, that the number of studies is large, and that the studies are independent. When these conditions hold and the proportion being estimated is not extreme (between 0.15 and 0.85), the parametric method may be more precise than the calibrated method. When using the parametric method and the estimated proportion is less than 0.15 or greater than 0.85, it is best to bootstrap the confidence interval using the bias-corrected and accelerated (BCa) method (Mathur & VanderWeele, 2018); this is the default behavior of prop_stronger. Sometimes BCa estimation fails, in which case prop_stronger falls back to the percentile method and issues a warning (note that the percentile method should not be used when bootstrapping the calibrated, rather than the parametric, estimates). We use a modified "safe" version of the boot package's bootstrapping code: if a bootstrap iterate fails (usually because of model estimation problems), its error message is printed but the iterate is simply discarded so that confidence interval estimation can proceed. As above, the mean of the bootstrap estimates of the proportion is returned as a diagnostic for potential bias in the estimated proportion.
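As a hypothetical sketch of a parametric analysis in which bootstrapping might be triggered, suppose M, t2, se.M, and se.t2 were taken from a fitted meta-analysis model (the numeric values below are placeholders) and the data frame my_dat were supplied so that the BCa bootstrap can run if needed:

prop_stronger( q = 0.5,
               M = 0.15,
               t2 = 0.10,
               se.M = 0.05,
               se.t2 = 0.03,
               tail = "above",
               estimate.method = "parametric",
               ci.method = "parametric",
               bootstrap = "ifneeded",  # bootstrap only if the proportion is extreme
               dat = my_dat,
               yi.name = "yi",
               vi.name = "vi",
               R = 1000 )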

The sign test method (Mathur & VanderWeele, 2020b) is an extension of work by Wang et al. (2010). This method was included in Mathur & VanderWeele's (2020b) simulation study; it performed adequately when heterogeneity was high but did not perform well with lower heterogeneity. Because there is no clear criterion for how much heterogeneity is enough for the method to perform well, we do not in general recommend its use. Additionally, this method requires that the distribution of population effects be reasonably symmetric and unimodal.
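If the sign test interval were nevertheless desired, the call would presumably mirror the calibrated call with ci.method switched to "sign.test", as in this hypothetical sketch (again using the hypothetical data frame my_dat):

prop_stronger( q = 0.2,
               tail = "above",
               estimate.method = "calibrated",
               ci.method = "sign.test",
               dat = my_dat,
               yi.name = "yi",
               vi.name = "vi" )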

Value

Returns a dataframe containing the point estimate for the proportion (est), its estimated standard error (se), lower and upper confidence interval limits (lo and hi), and, depending on the user's specifications, the mean of the bootstrap estimates of the proportion (bt.mn) and the p-value for a Shapiro test for normality conducted on the standardized point estimates (shapiro.pval).
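Because the result is a data frame, its columns can be read directly, as in this hypothetical sketch in which res holds the output of a call like those above:

res = prop_stronger( q = 0.2,
                     tail = "above",
                     dat = my_dat,
                     yi.name = "yi",
                     vi.name = "vi",
                     R = 1000 )
res$est            # point estimate of the proportion
c(res$lo, res$hi)  # confidence interval limits
res$se             # estimated standard error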

References

Mathur MB & VanderWeele TJ (2018). New metrics for meta-analyses of heterogeneous effects. Statistics in Medicine.

Mathur MB & VanderWeele TJ (2020a). New statistical metrics for multisite replication projects. Journal of the Royal Statistical Society: Series A.

Mathur MB & VanderWeele TJ (2020b). Robust metrics and sensitivity analyses for meta-analyses of heterogeneous effects. Epidemiology.

Mathur MB & VanderWeele TJ (2020c). Meta-regression methods to characterize evidence strength using meaningful-effect percentages conditional on study characteristics. Preprint available: https://osf.io/bmtdq.

Wang R, Tian L, Cai T, & Wei LJ (2010). Nonparametric inference procedure for percentiles of the random effects distribution in meta-analysis. Annals of Applied Statistics.

Wang C-C & Lee W-C (2019). A simple method to estimate prediction intervals and predictive distributions: Summarizing meta-analyses beyond means and confidence intervals. Research Synthesis Methods.

Examples

##### Example 1: BCG Vaccine and Tuberculosis Meta-Analysis #####

# calculate effect sizes for example dataset
d = metafor::escalc(measure="RR", ai=tpos, bi=tneg,
                    ci=cpos, di=cneg, data=metadat::dat.bcg)

# fit random-effects model
# note that metafor package returns on the log scale
m = metafor::rma.uni(yi = d$yi, vi = d$vi, knha = TRUE,
                     measure = "RR", method = "REML")

# pooled point estimate (RR scale)
exp(m$b)

# estimate the proportion of effects stronger than RR = 0.70
# as recommended, use the calibrated approach for both point estimation and CI
# bootstrap reps should be higher in practice (e.g., 1000)
# here using fewer for speed
prop_stronger( q = log(0.7),
               tail = "below",
               estimate.method = "calibrated",
               ci.method = "calibrated",
               dat = d,
               yi.name = "yi",
               vi.name = "vi",
               R = 100)
# warning goes away with more bootstrap iterates
# no Shapiro p-value because M and t2 were not provided

# now use the parametric approach (Mathur & VanderWeele 2018)
# no bootstrapping will be needed for this choice of q
prop_stronger( q = log(0.7),
               M = as.numeric(m$b),
               t2 = m$tau2,
               se.M = as.numeric(m$se),  # standard error of the pooled estimate
               se.t2 = m$se.tau2,
               tail = "below",
               estimate.method = "parametric",
               ci.method = "parametric",
               bootstrap = "ifneeded")


##### Example 2: Meta-Analysis of Multisite Replication Studies #####

# replication estimates (Fisher's z scale) and SEs
# from moral credential example in reference #2
r.fis = c(0.303, 0.078, 0.113, -0.055, 0.056, 0.073,
          0.263, 0.056, 0.002, -0.106, 0.09, 0.024, 0.069, 0.074,
          0.107, 0.01, -0.089, -0.187, 0.265, 0.076, 0.082)

r.SE = c(0.111, 0.092, 0.156, 0.106, 0.105, 0.057,
         0.091, 0.089, 0.081, 0.1, 0.093, 0.086, 0.076,
         0.094, 0.065, 0.087, 0.108, 0.114, 0.073, 0.105, 0.04)

d = data.frame( yi = r.fis,
                vi = r.SE^2 )

# meta-analyze the replications
m = metafor::rma.uni( yi = r.fis, vi = r.SE^2, measure = "ZCOR" )

# estimate the proportion of population effects above r = 0.10 (about 28%)
# convert the threshold from the r scale to Fisher's z
q = r_to_z(0.10)

# bootstrap reps should be higher in practice (e.g., 1000)
# here using only 100 for speed
prop_stronger( q = q,
               tail = "above",
               estimate.method = "calibrated",
               ci.method = "calibrated",
               dat = d,
               yi.name = "yi",
               vi.name = "vi",
               R = 100 )


# proportion of population effects at least as strong in the opposite direction (below r = -0.10)
q.star = r_to_z(-0.10)
prop_stronger( q = q.star,
               tail = "below",
               estimate.method = "calibrated",
               ci.method = "calibrated",
               dat = d,
               yi.name = "yi",
               vi.name = "vi",
               R = 100 )
# BCa fails to converge here
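
# per the Details section, if the bootstrap does not converge, one option is to
# try a threshold closer to the pooled point estimate, e.g. (hypothetical,
# less extreme threshold):
prop_stronger( q = r_to_z(-0.05),
               tail = "below",
               estimate.method = "calibrated",
               ci.method = "calibrated",
               dat = d,
               yi.name = "yi",
               vi.name = "vi",
               R = 100 )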
