meta.pen: A penalization approach to random-effects meta-analysis

View source: R/meta.pen.R

meta.pen R Documentation

A penalization approach to random-effects meta-analysis

Description

Performs the penalization methods introduced in Wang et al. (2022) to achieve a compromise between the common-effect and random-effects models.

Usage

meta.pen(y, s2, data, tuning.para = "tau", upp = 1, n.cand = 100, tol = 1e-10)

Arguments

y

a numeric vector or the corresponding column name in the argument data, specifying the observed effect sizes in the collected studies.

s2

a numeric vector or the corresponding column name in the argument data, specifying the within-study variances.

data

an optional data frame containing the meta-analysis dataset. If data is specified, the previous arguments, y and s2, should be specified as their corresponding column names in data.

tuning.para

a character string specifying the type of tuning parameter used in the penalization method. It should be one of "lambda" (use the penalty strength λ as the tuning parameter) or "tau" (use the between-study standard deviation as the tuning parameter). The default is "tau".

upp

a positive scalar used to control the upper bound of the range for the tuning parameter. Specifically, [0, T*upp] is used as the range of the tuning parameter, where T is the upper threshold value of the tuning parameter (λ_{\max} when tuning λ, or the RE estimate \hat{τ}_{RE} when tuning τ; see Details). The default value of upp is 1.

n.cand

the total number of candidate values of the tuning parameter within the specified range. The default is 100.

tol

the desired accuracy (convergence tolerance). The default is 1e-10.

Details

Suppose a meta-analysis collects n independent studies. Let μ_{i} be the true effect size in study i (i = 1, ..., n). Each study reports an estimate of the effect size and its sample variance, denoted by y_{i} and s_{i}^{2}, respectively. These data are commonly modeled as y_{i} \sim N(μ_{i}, s_{i}^{2}). If the study-specific true effect sizes μ_{i} are assumed i.i.d. from N(μ, τ^{2}), this gives the random-effects (RE) model, where μ is the overall effect size and τ^{2} is the between-study variance. If τ^{2} = 0, so that μ_{i} = μ for all studies, the studies are homogeneous and the RE model reduces to the common-effect (CE) model.
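As a toy illustration of this two-level structure, the following minimal R sketch simulates a meta-analysis under the RE model (all numbers are made-up values, chosen only for illustration):

set.seed(1)
n    <- 10
s2   <- runif(n, 0.05, 0.3)        ## within-study variances
mu   <- 0.5                        ## overall effect size
tau2 <- 0.1                        ## between-study variance
mu.i <- rnorm(n, mu, sqrt(tau2))   ## study-specific true effect sizes
y    <- rnorm(n, mu.i, sqrt(s2))   ## observed effect sizes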

Marginally, the RE model yields y_{i} \sim N(μ,s_{i}^2+τ^{2}), and its log-likelihood is

l(μ, τ^{2}) = -\frac{1}{2} ∑_{i=1}^{n} \left[\log(s_{i}^{2} + τ^{2}) + \frac{(y_{i} - μ)^2}{s_{i}^2 + τ^2}\right] + C,

where C is a constant. In the past two decades, penalization methods have been rapidly developed for variable selection in high-dimensional data analysis to control model complexity and reduce the variance of parameter estimates. Borrowing this idea, we place a penalty term on the between-study variance τ^2 to guard against overestimated heterogeneity; the penalty increases with τ^2. Specifically, we consider the following optimization problem:

(\hat{μ}(λ), \hat{τ}^2(λ)) = \arg\min_{μ, τ^2 ≥ 0} \left\{∑_{i=1}^{n} \left[\log(s_{i}^{2} + τ^{2}) + \frac{(y_{i} - μ)^2}{s_{i}^2 + τ^2}\right] + λ τ^2\right\},

where λ ≥ 0 is a tuning parameter that controls the penalty strength. Using the profile-likelihood technique, we take the derivative of the objective function with respect to μ for a given τ^2 and substitute the resulting closed-form solution back in, which reduces the bivariate optimization problem to a univariate minimization in τ^2. When λ = 0, the minimization problem is equivalent to minimizing the log-likelihood without penalty, so the penalized-likelihood method is identical to the conventional RE model. By contrast, it can be shown that a sufficiently large λ produces an estimated between-study variance of 0, leading to the conventional CE model.
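For a given τ^2, the inner minimization over μ has a closed-form solution, the inverse-variance weighted mean, which is what the profiling step exploits. A minimal R sketch of the resulting univariate objective (pen.obj is a hypothetical helper name, not part of altmeta) is:

pen.obj <- function(tau2, y, s2, lambda) {
  w  <- 1 / (s2 + tau2)        ## inverse-variance weights
  mu <- sum(w * y) / sum(w)    ## profiled-out overall effect size
  ## bracketed sum of the optimization problem, plus the penalty term
  sum(log(s2 + tau2) + w * (y - mu)^2) + lambda * tau2
}
## for a given lambda, minimize over tau2 on a bounded interval, e.g.,
## tau2.hat <- optimize(pen.obj, c(0, upper), y = y, s2 = s2, lambda = lambda)$minimum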

As different tuning parameters lead to different estimates \hat{μ}(λ) and \hat{τ}^2(λ), it is important to select the optimal λ among a set of candidate values. We use cross-validation to construct a loss function of λ that measures the performance of each candidate; the λ with the smallest loss is considered optimal. A threshold λ_{\max} based on the penalty function p(τ^2) = τ^2 can be calculated such that, for all λ > λ_{\max}, the estimated between-study variance is 0. Consequently, a certain number of candidate values (e.g., 100) are selected from the range [0, λ_{\max}]. For each candidate, leave-one-study-out (i.e., n-fold) cross-validation is used to evaluate the following loss function for tuning λ:

\hat{L}(λ) = \left[\frac{1}{n} ∑_{i=1}^{n} \frac{(y_{i} - \hat{μ}_{(-i)}(λ))^2}{s_{i}^2 + \hat{τ}_{RE(-i)}^2 + \frac{∑_{j \ne i}(s_{j}^2 + \hat{τ}_{RE(-i)}^2) / (s_{j}^2 + \hat{τ}^2_{(-i)}(λ))^2}{(∑_{j \ne i} 1 / (s_{j}^2 + \hat{τ}^2_{(-i)}(λ)))^2}}\right]^{1/2},

where the subscript (-i) indicates that study i is removed, \hat{τ}^2_{(-i)}(λ) is the estimated between-study variance for a given λ, \hat{τ}_{RE(-i)}^2 is the corresponding between-study variance estimate of the RE model, and

\hat{μ}_{(-i)}(λ) = \frac{∑_{j \ne i} y_{j} / (s_{j}^2 + \hat{τ}^2_{(-i)}(λ))}{∑_{j \ne i} 1 / (s_{j}^2 + \hat{τ}^2_{(-i)}(λ))}.
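This cross-validation recipe can be sketched in R as follows (loss.lambda is a hypothetical helper reusing pen.obj from the earlier sketch; upper is an assumed search bound for τ^2):

loss.lambda <- function(lambda, y, s2, upper) {
  n <- length(y)
  terms <- sapply(seq_len(n), function(i) {
    yi <- y[-i]; si2 <- s2[-i]
    ## penalized and unpenalized (ML) between-study variances without study i
    tau2.l  <- optimize(pen.obj, c(0, upper), y = yi, s2 = si2,
                        lambda = lambda)$minimum
    tau2.re <- optimize(pen.obj, c(0, upper), y = yi, s2 = si2,
                        lambda = 0)$minimum
    w  <- 1 / (si2 + tau2.l)
    mu <- sum(w * yi) / sum(w)    ## leave-one-out estimate of mu for this lambda
    v  <- s2[i] + tau2.re +
      sum((si2 + tau2.re) / (si2 + tau2.l)^2) / sum(w)^2
    (y[i] - mu)^2 / v
  })
  sqrt(mean(terms))
}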

The above procedure tunes λ to control the penalty strength on the between-study variance. Alternatively, to shrink potentially overestimated heterogeneity, we can directly treat the between-study standard deviation (SD), τ, as the tuning parameter: a set of candidate values of τ is considered, and the value producing the minimum loss is selected. Compared with tuning λ from the penalized-likelihood perspective, tuning τ is more straightforward and intuitive in practice. The candidate values of τ can be naturally chosen from [0, \hat{τ}_{RE}], with the lower and upper bounds corresponding to the CE and RE models, respectively. Denoting the candidate SDs by τ_{t}, the loss function with respect to τ_{t} is similarly defined as

\hat{L}(τ_{t}) = \left[\frac{1}{n} ∑_{i=1}^{n} \frac{(y_i - \hat{μ}_{(-i)}(τ_{t}))^2}{s_{i}^2 + \hat{τ}^2_{RE(-i)} + \frac{∑_{j \ne i}(s_{j}^2 + \hat{τ}^2_{RE(-i)}) / (s_{j}^2 + τ_{t}^2)^2}{[∑_{j \ne i} 1 / (s_{j}^2 + τ_{t}^2)]^2}}\right]^{1/2},

where the overall effect size estimate (excluding study i) is

\hat{μ}_{(-i)}(τ_{t}) = \frac{∑_{j \ne i} y_j / (s_{j}^2 + τ_{t}^2)}{∑_{j \ne i} 1 / (s_{j}^2 + τ_{t}^2)}.
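A parallel sketch for tuning τ replaces the penalized leave-one-out estimate \hat{τ}^2_{(-i)}(λ) with the fixed candidate value τ_{t}^2 (again reusing the hypothetical pen.obj):

loss.tau <- function(tau.t, y, s2, upper) {
  n <- length(y)
  terms <- sapply(seq_len(n), function(i) {
    yi <- y[-i]; si2 <- s2[-i]
    ## ML (unpenalized) between-study variance without study i
    tau2.re <- optimize(pen.obj, c(0, upper), y = yi, s2 = si2,
                        lambda = 0)$minimum
    w  <- 1 / (si2 + tau.t^2)
    mu <- sum(w * yi) / sum(w)    ## leave-one-out estimate of mu at tau.t
    v  <- s2[i] + tau2.re +
      sum((si2 + tau2.re) / (si2 + tau.t^2)^2) / sum(w)^2
    (y[i] - mu)^2 / v
  })
  sqrt(mean(terms))
}
## evaluate the loss over a grid on [0, tau.re] and take the minimizer, e.g.,
## tau.cand <- seq(0, tau.re, length.out = 100)
## tau.opt  <- tau.cand[which.min(sapply(tau.cand, loss.tau, y = y, s2 = s2, upper = upper))]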

Value

This function returns a list containing estimates of the overall effect size, their standard errors, and related quantities under the CE, RE, and penalization approaches. Specifically, the components include:

n

the number of studies in the meta-analysis.

I2

the I^2 statistic for quantifying heterogeneity.

tau2.re

the maximum likelihood estimate of the between-study variance.

mu.fe

the estimated overall effect size of the CE model.

se.fe

the standard error of the overall effect size estimate of the CE model.

mu.re

the estimated overall effect size of the RE model.

se.re

the standard error of the overall effect size estimate of the RE model.

loss

the values of the loss function for candidate tuning parameters.

tau.cand

the candidate values of the tuning parameter τ.

lambda.cand

the candidate values of the tuning parameter λ.

tau.opt

the estimated between-study standard deviation of the penalization method.

mu.opt

the estimated overall effect size of the penalization method.

se.opt

the standard error estimate of the estimated overall effect size of the penalization method.
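For example, the CE, RE, and penalized estimates can be compared directly from these components (dat is a placeholder data frame with columns y and s2):

out <- meta.pen(y, s2, dat)
c(CE = out$mu.fe, RE = out$mu.re, penalized = out$mu.opt)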

Author(s)

Yipeng Wang yipeng.wang1@ufl.edu

References

Wang Y, Lin L, Thompson CG, Chu H (2022). "A penalization approach to random-effects meta-analysis." Statistics in Medicine, 41(3), 500-516. <doi:10.1002/sim.9261>

Examples

data("dat.bohren") ## log odds ratio
## perform the penalization method by tuning tau
out11 <- meta.pen(y, s2, dat.bohren)
## plot the loss function and candidate taus
plot(out11$tau.cand, out11$loss, xlab = NA, ylab = NA, 
  lwd = 1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8)
title(xlab = expression(paste(tau[t])),
  ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))),
  cex.lab = 0.8, line = 2)
idx <- which(out11$loss == min(out11$loss))
abline(v = out11$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2)
     
## perform the penalization method by tuning lambda
out12 <- meta.pen(y, s2, dat.bohren, tuning.para = "lambda")
## plot the loss function and candidate lambdas
plot(log(out12$lambda.cand + 1), out12$loss, xlab = NA, ylab = NA, 
  lwd = 1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8)
title(xlab = expression(log(lambda + 1)),
  ylab = expression(paste("Loss function ", ~hat(L)(lambda))),
  cex.lab = 0.8, line = 2)
idx <- which(out12$loss == min(out12$loss))
abline(v = log(out12$lambda.cand[idx] + 1), col = "gray", lwd = 1.5, lty = 2)


data("dat.bjelakovic") ## log odds ratio
## perform the penalization method by tuning tau
out21 <- meta.pen(y, s2, dat.bjelakovic)
## plot the loss function and candidate taus
plot(out21$tau.cand, out21$loss, xlab = NA, ylab = NA, 
  lwd = 1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8)
title(xlab = expression(paste(tau[t])),
  ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))),
  cex.lab = 0.8, line = 2)
idx <- which(out21$loss == min(out21$loss))
abline(v = out21$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2)

out22 <- meta.pen(y, s2, dat.bjelakovic, tuning.para = "lambda")

data("dat.carless") ## log odds ratio
## perform the penalization method by tuning tau
out31 <- meta.pen(y, s2, dat.carless)
## plot the loss function and candidate taus
plot(out31$tau.cand, out31$loss, xlab = NA, ylab = NA, 
  lwd = 1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8)
title(xlab = expression(paste(tau[t])),
  ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))),
  cex.lab = 0.8, line = 2)
idx <- which(out31$loss == min(out31$loss))
abline(v = out31$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2)

out32 <- meta.pen(y, s2, dat.carless, tuning.para = "lambda")

data("dat.adams") ## mean difference
out41 <- meta.pen(y, s2, dat.adams)
## plot the loss function and candidate taus
plot(out41$tau.cand, out41$loss, xlab = NA, ylab = NA, 
  lwd = 1.5, type = "l", cex.lab = 0.8, cex.axis = 0.8)
title(xlab = expression(paste(tau[t])),
  ylab = expression(paste("Loss function ", ~hat(L)(tau[t]))),
  cex.lab = 0.8, line = 2)
idx <- which(out41$loss == min(out41$loss))
abline(v = out41$tau.cand[idx], col = "gray", lwd = 1.5, lty = 2)

out42 <- meta.pen(y, s2, dat.adams, tuning.para = "lambda")

