bayesmeta: Bayesian random-effects meta-analysis

bayesmeta R Documentation

Bayesian random-effects meta-analysis

Description

This function derives the posterior distribution of the two parameters in a random-effects meta-analysis and provides functions to evaluate joint and marginal posterior probability distributions, etc.

Usage

  bayesmeta(y, ...)
  ## Default S3 method:
bayesmeta(y, sigma, labels = names(y),
          tau.prior = "uniform",
          mu.prior = c("mean"=NA,"sd"=NA),
          mu.prior.mean = mu.prior[1], mu.prior.sd = mu.prior[2],
          interval.type = c("shortest", "central"),
          delta = 0.01, epsilon = 0.0001,
          rel.tol.integrate = 2^16*.Machine$double.eps,
          abs.tol.integrate = 0.0,
          tol.uniroot = rel.tol.integrate, ...)
  ## S3 method for class 'escalc'
bayesmeta(y, labels = NULL, ...)

Arguments

y

vector of estimates or an escalc object.

sigma

vector of standard errors associated with y.

labels

(optional) a vector of labels corresponding to y and sigma.

tau.prior

a function returning the prior density for the heterogeneity parameter (\tau) or a character string specifying one of the default ‘non-informative’ priors; possible choices for the latter case are:

  • "uniform": a uniform prior in \tau

  • "sqrt": a uniform prior in \sqrt{\tau}

  • "Jeffreys": the Jeffreys prior for \tau

  • "BergerDeely": the prior due to Berger and Deely (1988)

  • "conventional": the conventional prior

  • "DuMouchel": the DuMouchel prior

  • "shrinkage": the ‘uniform shrinkage’ prior

  • "I2": a uniform prior on the ‘relative heterogeneity’ I^2

The default is "uniform" (which should be used with caution; see remarks below). The above priors are described in some more detail below.

mu.prior

the mean and standard deviation of the normal prior distribution for the effect \mu. If unspecified, an (improper) uniform prior is used.

mu.prior.mean, mu.prior.sd

alternative parameters to specify the prior distribution for the effect \mu (see above).

interval.type

the type of (credible, prediction, shrinkage) interval to be returned by default; either "shortest" for shortest intervals, or "central" for central, equal-tailed intervals.

delta, epsilon

the parameters specifying the desired accuracy for approximation of the \mu posterior(s), thereby determining the number of \tau support points used internally. Smaller values imply greater accuracy and greater computational burden (Roever and Friede, 2017).

rel.tol.integrate, abs.tol.integrate, tol.uniroot

the rel.tol, abs.tol and tol ‘accuracy’ arguments that are passed to the integrate() or uniroot() functions for internal numerical integration or root finding (see also the help there).

...

other bayesmeta arguments.

Details

The common random-effects meta-analysis model may be stated as

y_i|\mu,\sigma_i,\tau \;\sim\; \mathrm{Normal}(\mu, \, \sigma_i^2 + \tau^2)

where the data (y, \sigma) enter as y_i, the i-th estimate, that is associated with standard error \sigma_i > 0, and i=1,...,k. The model includes two unknown parameters, namely the (mean) effect \mu, and the heterogeneity \tau. Alternatively, the model may also be formulated via an intermediate step as

y_i|\theta_i,\sigma_i \;\sim\; \mathrm{Normal}(\theta_i, \, \sigma_i^2),

\theta_i|\mu,\tau \;\sim\; \mathrm{Normal}(\mu, \, \tau^2),

where the \theta_i denote the trial-specific means that are then measured through the estimate y_i with an associated measurement uncertainty \sigma_i. The \theta_i again differ from trial to trial and are distributed around a common mean \mu with standard deviation \tau.

The bayesmeta() function utilizes the fact that the joint posterior distribution p(\mu, \tau | y, \sigma) may be partly analytically integrated out to determine the integrals necessary for coherent Bayesian inference on one or both of the parameters.

As long as we assume that the prior probability distribution may be factored into independent marginals p(\mu,\tau)=p(\mu)\times p(\tau) and either an (improper) uniform or a normal prior is used for the effect \mu, the joint likelihood p(y|\mu,\tau) may be analytically marginalized over \mu, yielding the marginal likelihood function p(y|\tau). Using any prior p(\tau) for the heterogeneity, the 1-dimensional marginal posterior p(\tau|y) \propto p(y|\tau)\times p(\tau) may then be treated numerically. As the conditional posterior p(\mu|\tau,y) is a normal distribution, inference on the remaining joint (p(\mu,\tau|y)) or marginal (p(\mu|y)) posterior may be approached numerically (using the DIRECT algorithm) as well. Accuracy of the computations is determined by the delta (maximum divergence \delta) and epsilon (tail probability \epsilon) parameters (Roever and Friede, 2017).
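
As a small illustration of this analytical step, a sketch using hypothetical toy data (the values below are placeholders): for a given \tau and an improper uniform effect prior, the conditional posterior of \mu is normal with moments given by inverse-variance weighting, which is what the fitted object's $cond.moment() function returns.

# hypothetical toy data: three estimates with standard errors
y     <- c(0.10, 0.35, -0.05)
sigma <- c(0.20, 0.25, 0.15)
# conditional posterior moments of mu for a given tau
# (improper uniform effect prior; inverse-variance weights):
cond.mu <- function(tau) {
  w <- 1 / (sigma^2 + tau^2)
  c("mean" = sum(w * y) / sum(w), "sd" = sqrt(1 / sum(w)))
}
cond.mu(tau = 0.1)
# should agree with the fitted object's conditional moments, e.g.
# (requires the bayesmeta package):
#   ma <- bayesmeta(y = y, sigma = sigma)
#   ma$cond.moment(tau = 0.1)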

What constitutes a sensible prior for both \mu and \tau depends (as usual) very much on the context. Potential candidates for informative (or weakly informative) heterogeneity (\tau) priors may include the half-normal, half-Student-t, half-Cauchy, exponential, or Lomax distributions; we recommend heavy-tailed prior distributions if the prior is supposed to be discounted in case of prior/data conflict (O'Hagan and Pericchi, 2012). A sensible informative prior might also be a log-normal or a scaled inverse \chi^2 distribution. One might argue that an uninformative prior for \tau may be uniform or monotonically decreasing in \tau. Another option is to use the Jeffreys prior (see the tau.prior argument above). The Jeffreys prior implemented here is the variant also described by Tibshirani (1989) that results from fixing the location parameter (\mu) and considering the Fisher information element corresponding to the heterogeneity \tau only. This prior also constitutes the Berger/Bernardo reference prior for the present problem (Bodnar et al., 2016). The uniform shrinkage and DuMouchel priors are described in Spiegelhalter et al. (2004, Sec. 5.7.3). The procedure is able to handle improper priors (and does so by default), but of course the usual care must be taken here, as the resulting posterior may or may not be a proper probability distribution.
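
For example, an informative heterogeneity prior may be passed as a density function via the tau.prior argument; a sketch using the package's dhalfnormal() and dhalfcauchy() densities (the toy data and scale values below are arbitrary placeholders):

library("bayesmeta")
y     <- c(0.10, 0.35, -0.05)   # hypothetical estimates
sigma <- c(0.20, 0.25, 0.15)    # hypothetical standard errors
# half-normal heterogeneity prior (scale chosen as a placeholder):
ma.hn <- bayesmeta(y = y, sigma = sigma,
                   tau.prior = function(t){dhalfnormal(t, scale = 0.5)})
# heavier-tailed alternative: half-Cauchy prior:
ma.hc <- bayesmeta(y = y, sigma = sigma,
                   tau.prior = function(t){dhalfcauchy(t, scale = 1.0)})
ma.hn$summary["median", "tau"]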

Note that a wide range of different types of endpoints may be treated on a continuous scale after an appropriate transformation; for example, count data may be handled by considering corresponding logarithmic odds ratios. Many such transformations are implemented in the metafor package's escalc function and may be directly used as an input to the bayesmeta() function; see also the example below. Alternatively, the compute.es package also provides a range of effect sizes to be computed from different data types.

The bayesmeta() function eventually generates some basic summary statistics, but most importantly it provides an object containing a range of functions for evaluating posterior distributions; this includes joint and marginal posteriors, prior and likelihood, predictive distributions, densities, cumulative distributions and quantile functions. For more details see also the documentation and examples below. The individual argument gives access to posteriors of study-specific (shrinkage) effects (\theta_i). The predict argument may be used to access the predictive distribution of a future study's effect (\theta_{k+1}), facilitating a meta-analytic-predictive (MAP) approach (Neuenschwander et al., 2010).
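
For instance, prediction and shrinkage distributions might be accessed as follows (a sketch with hypothetical toy data; argument combinations as documented under 'Value' below):

library("bayesmeta")
y     <- c(0.10, 0.35, -0.05)   # hypothetical estimates
sigma <- c(0.20, 0.25, 0.15)    # hypothetical standard errors
ma <- bayesmeta(y = y, sigma = sigma,
                tau.prior = function(t){dhalfnormal(t, scale = 0.5)})
# 95% credible interval for the overall effect mu:
ma$post.interval(mu.level = 0.95)
# 95% prediction interval for a future study's effect (MAP distribution):
ma$post.interval(mu.level = 0.95, predict = TRUE)
# posterior median of the 2nd study's shrinkage effect (theta_2):
ma$qposterior(theta.p = 0.5, individual = 2)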

Prior specification details

When specifying the tau.prior argument as a character string (and not as a prior density function), then the actual prior probability density functions corresponding to the possible choices of the tau.prior argument are given by:

  • "uniform" - the (improper) uniform prior in \tau:

    p(\tau) \;\propto\; 1

  • "sqrt" - the (improper) uniform prior in \sqrt{\tau}:

    p(\tau) \;\propto\; \tau^{-1/2} \;=\; \frac{1}{\sqrt{\tau}}

  • "Jeffreys" - Tibshirani's noninformative prior, a variation of the Jeffreys prior, which here also constitutes the Berger/Bernardo reference prior for \tau:

    p(\tau) \;\propto\; \sqrt{\sum_{i=1}^k\Bigl(\frac{\tau}{\sigma_i^2+\tau^2}\Bigr)^2}

    (This is also an improper prior; its density does not integrate to a finite value.)

  • "BergerDeely" - the (improper) Berger/Deely prior:

    p(\tau) \;\propto\; \prod_{i=1}^k \Bigl(\frac{\tau}{\sigma_i^2+\tau^2}\Bigr)^{1/k}

    This is a variation of the above Jeffreys prior, and both are equal in case all standard errors (\sigma_i) are the same.

  • "conventional" - the (proper) conventional prior:

    p(\tau) \;\propto\; \prod_{i=1}^k \biggl(\frac{\tau}{(\sigma_i^2+\tau^2)^{3/2}}\biggr)^{1/k}

    This is a proper variation of the above Berger/Deely prior intended especially for testing and model selection purposes.

  • "DuMouchel" - the (proper) DuMouchel prior:

    p(\tau) \;=\; \frac{s_0}{(s_0+\tau)^2}

    where s_0^2=\frac{k}{\sum_{i=1}^k \sigma_i^{-2}} is the harmonic mean of the squared standard errors \sigma_i^2, and s_0=\sqrt{s_0^2}.

  • "shrinkage" - the (proper) uniform shrinkage prior:

    p(\tau) \;=\; \frac{2 s_0^2 \tau}{(s_0^2+\tau^2)^2}

    where s_0^2=\frac{k}{\sum_{i=1}^k \sigma_i^{-2}} is the harmonic mean of the squared standard errors \sigma_i^2.

  • "I2" - the (proper) uniform prior in I^2:

    p(\tau) \;=\; \frac{2 \hat{\sigma}^2 \tau}{(\hat{\sigma}^2 + \tau^2)^2}

    where \hat{\sigma}^2 = \frac{(k-1)\; \sum_{i=1}^k\sigma_i^{-2}}{\bigl(\sum_{i=1}^k\sigma_i^{-2}\bigr)^2 - \sum_{i=1}^k\sigma_i^{-4}}. This prior is similar to the uniform shrinkage prior, except for the use of \hat{\sigma}^2 instead of s_0^2.

For empirically motivated informative heterogeneity priors see also the TurnerEtAlPrior() and RhodesEtAlPrior() functions. For more general information especially on (weakly) informative heterogeneity prior distributions, see Roever et al. (2021).
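
To illustrate the correspondence, one of the above priors could also be specified 'manually' as a density function; a sketch for the uniform shrinkage prior (hypothetical toy data), which should be equivalent to using tau.prior="shrinkage":

library("bayesmeta")
y     <- c(0.10, 0.35, -0.05)   # hypothetical estimates
sigma <- c(0.20, 0.25, 0.15)    # hypothetical standard errors
# harmonic mean of the squared standard errors:
s02 <- length(sigma) / sum(1 / sigma^2)
# uniform shrinkage prior density (see formula above):
shrink.prior <- function(t){2 * s02 * t / (s02 + t^2)^2}
ma.a <- bayesmeta(y = y, sigma = sigma, tau.prior = shrink.prior)
ma.b <- bayesmeta(y = y, sigma = sigma, tau.prior = "shrinkage")
# the two fits should (approximately) agree:
rbind("manual" = ma.a$summary[, "tau"], "built-in" = ma.b$summary[, "tau"])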

Credible intervals

Credible intervals (as well as prediction and shrinkage intervals) may be determined in different ways. By default, shortest intervals are returned, which for unimodal posteriors (the usual case) are equivalent to highest posterior density regions (Gelman et al., 1997, Sec. 2.3). Alternatively, central (equal-tailed) intervals may also be derived. The default behaviour may be controlled via the interval.type argument, or via the method argument in each individual call of the $post.interval() function (see below). A third option, available only through the $post.interval() function (and hence not for prediction or shrinkage intervals, nor as an overall default choice), is the evidentiary credible interval, which has the advantage of being parameterization invariant (Shalloway, 2014).
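
For example (a sketch with hypothetical toy data), the default may be set at fitting time and overridden in individual calls:

library("bayesmeta")
y     <- c(0.10, 0.35, -0.05)   # hypothetical estimates
sigma <- c(0.20, 0.25, 0.15)    # hypothetical standard errors
ma <- bayesmeta(y = y, sigma = sigma, interval.type = "central")
# central (the default set above), shortest and evidentiary 95% intervals for mu:
ma$post.interval(mu.level = 0.95)
ma$post.interval(mu.level = 0.95, method = "shortest")
ma$post.interval(mu.level = 0.95, method = "evidentiary")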

Bayes factors

Bayes factors (Kass and Raftery, 1995) for the two hypotheses of \tau=0 and \mu=0 are provided in the $bayesfactor element; low or high values here constitute evidence against or in favour of the hypotheses, respectively. Bayes factors are based on marginal likelihoods and can only be computed if the priors for heterogeneity and effect are proper. Bayes factors for other hypotheses can be computed using the marginal likelihood (as provided through the $marginal.likelihood element) and the $likelihood() function. Bayes factors must be interpreted with great caution, as they are susceptible to Lindley's paradox (Lindley, 1957), which especially implies that variations of the prior specifications that have only minuscule effects on the posterior distribution may have a substantial impact on Bayes factors (via the marginal likelihood). For more details on the problems and challenges related to Bayes factors see also Gelman et al. (1997, Sec. 7.4).

Besides the ‘actual’ Bayes factors, minimum Bayes factors are also provided (Spiegelhalter et al., 2004; Sec. 4.4). The minimum Bayes factor for the hypothesis of \mu=0 constitutes the minimum across all possible priors for \mu and hence gives a measure of how much (or how little) evidence against the hypothesis is provided by the data at most. It is independent of the particular effect prior used in the analysis, but still dependent on the heterogeneity prior. Analogously, the same is true for the heterogeneity's minimum Bayes factor. A minimum Bayes factor can also be computed when only one of the priors is proper.
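
A sketch (hypothetical toy data; proper priors for both parameters so that the marginal likelihood is defined) of how the Bayes factor elements might be accessed, and how a Bayes factor for another point hypothesis could be derived from the $likelihood() function and the marginal likelihood:

library("bayesmeta")
y     <- c(0.10, 0.35, -0.05)   # hypothetical estimates
sigma <- c(0.20, 0.25, 0.15)    # hypothetical standard errors
ma <- bayesmeta(y = y, sigma = sigma,
                mu.prior.mean = 0, mu.prior.sd = 1,
                tau.prior = function(t){dhalfnormal(t, scale = 0.5)})
ma$bayesfactor          # actual and minimum Bayes factors for tau=0 and mu=0
# Bayes factor for the point hypothesis tau = 0.25 (vs. the unrestricted model),
# based on the marginal likelihood:
ma$likelihood(tau = 0.25) / ma$marginal.likelihood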

Numerical accuracy

Accuracy of the numerical results is determined by four parameters, namely, the accuracy of numerical integration as specified through the rel.tol.integrate and abs.tol.integrate arguments (which are internally passed on to the integrate function), and the accuracy of the grid approximation used for integrating out the heterogeneity as specified through the delta and epsilon arguments (Roever and Friede, 2017). As these may also heavily impact the computation time, be careful when changing them from their default values. You can monitor the effect of different settings by checking the number and range of support points returned in the $support element.
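
For instance (a sketch with hypothetical toy data), the effect of tightening the delta and epsilon settings can be monitored via the size of the $support grid:

library("bayesmeta")
y     <- c(0.10, 0.35, -0.05)   # hypothetical estimates
sigma <- c(0.20, 0.25, 0.15)    # hypothetical standard errors
ma.default <- bayesmeta(y = y, sigma = sigma)
ma.finer   <- bayesmeta(y = y, sigma = sigma, delta = 0.001, epsilon = 1e-05)
# number of tau support points used internally:
nrow(ma.default$support)
nrow(ma.finer$support)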

Study weights

Conditional on a given \tau value, the posterior expectations of the overall effect (\mu) as well as the shrinkage estimates (\theta_i) result as convex combinations of the estimates y_i. The weights associated with each estimate y_i are commonly quoted in frequentist meta-analysis results in order to quantify (arguably somewhat heuristically) each study's contribution to the overall estimates, often in terms of percentages.

In a Bayesian meta-analysis, these numbers do not immediately arise, since the heterogeneity is marginalized over. However, due to linearity, the posterior mean effects may still be expressed as linear combinations of the initial estimates y_i, with weights now given by the posterior mean weights, marginalized over the heterogeneity \tau (Roever and Friede, 2021). The posterior mean weights are returned in the $weights and $weights.theta elements, for the overall effect \mu as well as for the shrinkage estimates \theta_i.
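
A sketch (hypothetical toy data) showing how the posterior mean weights might be inspected, e.g. as percentages:

library("bayesmeta")
y     <- c(0.10, 0.35, -0.05)   # hypothetical estimates
sigma <- c(0.20, 0.25, 0.15)    # hypothetical standard errors
ma <- bayesmeta(y = y, sigma = sigma)
# posterior mean weights for the overall effect (sum to 100%):
round(100 * ma$weights, 1)
# per-study weights underlying each shrinkage estimate (one column per study):
round(100 * ma$weights.theta, 1)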

Value

A list of class bayesmeta containing the following elements:

y

vector of estimates (the input data).

sigma

vector of standard errors corresponding to y (input data).

labels

vector of labels corresponding to y and sigma.

k

number of data points (in y).

tau.prior

the prior probability density function for \tau.

mu.prior.mean

the prior mean of \mu.

mu.prior.sd

the prior standard deviation of \mu.

dprior

a function(tau=NA, mu=NA, log=FALSE) of two parameters, tau and/or mu, returning either the joint or marginal prior probability density, depending on which parameter(s) is/are provided.

tau.prior.proper

a logical flag indicating whether the heterogeneity prior appears to be proper (which is judged based on an attempted numerical integration of the density function).

likelihood

a function(tau=NA, mu=NA, log=FALSE) of two parameters, tau and/or mu, returning either the joint or marginal likelihood, depending on which parameter(s) is/are provided.

dposterior

a function(tau=NA, mu=NA, theta=mu, log=FALSE, predict=FALSE, individual=FALSE) of two parameters, tau and/or mu, returning either the joint or marginal posterior probability density, depending on which parameter(s) is/are provided. Using the argument predict=TRUE yields the posterior predictive distribution for \theta. Using the individual argument, you can request individual effects' (\theta_i) posterior distributions. May be an integer number (1,...,k) giving the index, or a character string giving the label.

pposterior

a function(tau=NA, mu=NA, theta=mu, predict=FALSE, individual=FALSE) of one parameter (either tau or mu) returning the corresponding marginal posterior cumulative distribution function. Using the argument predict=TRUE yields the posterior predictive distribution for \theta. Using the individual argument, you can request individual effects' (\theta_i) posterior distributions. May be an integer number (1,...,k) giving the index, or a character string giving the label.

qposterior

a function(tau.p=NA, mu.p=NA, theta.p=mu.p, predict=FALSE, individual=FALSE) of one parameter (either tau.p or mu.p) returning the corresponding marginal posterior quantile function. Using the argument predict=TRUE yields the posterior predictive distribution for \theta. Using the individual argument, you can request individual effects' (\theta_i) posterior distributions. May be an integer number (1,...,k) giving the index, or a character string giving the label.

rposterior

a function(n=1, predict=FALSE, individual=FALSE, tau.sample=TRUE) generating n independent random draws from the (2-dimensional) posterior distribution. Using the argument predict=TRUE yields the posterior predictive distribution for \theta. Using the individual argument, you can request individual effects' (\theta_i) posterior distributions. May be an integer number (1,...,k) giving the index, or a character string giving the label. In general, this is done via the inversion method, so it is rather slow. However, if one is not interested in sampling the heterogeneity parameter (\tau), using ‘tau.sample=FALSE’ will speed up the function substantially.

post.interval

a function(tau.level=NA, mu.level=NA, theta.level=mu.level, method=c("shortest","central","evidentiary"), predict=FALSE, individual=FALSE) returning a credible interval, depending on which of the two parameters is provided (either tau.level or mu.level). The additional parameter method may be used to specify the desired type of interval: method = "shortest" returns the shortest interval, method = "central" returns a central interval, and method = "evidentiary" returns an evidentiary interval (Shalloway, 2014); the first is the default option. Using the argument predict=TRUE yields a posterior predictive interval for \theta. Using the individual argument, you can request individual effects' (\theta_i) posterior distributions. May be an integer number (1,...,k) giving the index, or a character string giving the label.

cond.moment

a function(tau, predict=FALSE, individual=FALSE, simplify=TRUE) returning conditional moments (mean and standard deviation) of \mu as a function of \tau. Using the argument predict=TRUE yields (conditional) posterior predictive moments for \theta. Using the individual argument, you can request individual effects' (\theta_i) posterior distributions. May be an integer number (1,...,k) giving the index, or a character string giving the label.

I2

a function(tau) returning the ‘relative’ heterogeneity I^2 as a function of \tau.

summary

a matrix listing some summary statistics, namely marginal posterior mode, median, mean, standard deviation and a (shortest) 95% credible interval, of the marginal posterior distributions of \tau and \mu, and of the posterior predictive distribution of \theta.

interval.type

the interval.type input argument specifying the type of interval to be returned by default.

ML

a matrix giving joint and marginal maximum-likelihood estimates of (\tau,\mu).

MAP

a matrix giving joint and marginal maximum-a-posteriori estimates of (\tau,\mu).

theta

a matrix giving the ‘shrinkage estimates’, i.e., summary statistics of the trial-specific means \theta_i.

weights

a vector giving the posterior expected inverse-variance weights for each study (and for the effect prior mean, if the effect prior was proper).

weights.theta

a matrix whose columns give the posterior expected weights of each study (and of the effect prior mean, if the effect prior was proper) for all shrinkage estimates.

marginal.likelihood

the marginal likelihood of the data (this number is only computed if proper effect and heterogeneity priors are specified).

bayesfactor

Bayes factors and minimum Bayes factors for the two hypotheses of \tau=0 and \mu=0. These depend on the marginal likelihood and hence can only be computed if proper effect and/or heterogeneity priors are specified; see also remark above.

support

a matrix giving the \tau support points used internally in the grid approximation, along with their associated weights, conditional mean and standard deviation of \mu, and the standard deviation of the (conditional) predictive distribution of \theta.

delta, epsilon

the ‘delta’ and ‘epsilon’ input parameters determining numerical accuracy.

rel.tol.integrate, abs.tol.integrate, tol.uniroot

the input parameters determining the numerical accuracy of the internally used integrate() and uniroot() functions.

call

an object of class call giving the function call that generated the bayesmeta object.

init.time

the computation time (in seconds) used to generate the bayesmeta object.

Author(s)

Christian Roever christian.roever@med.uni-goettingen.de

References

C. Roever. Bayesian random-effects meta-analysis using the bayesmeta R package. Journal of Statistical Software, 93(6):1-51, 2020. doi:10.18637/jss.v093.i06.

C. Roever, R. Bender, S. Dias, C.H. Schmid, H. Schmidli, S. Sturtz, S. Weber, T. Friede. On weakly informative prior distributions for the heterogeneity parameter in Bayesian random-effects meta-analysis. Research Synthesis Methods, 12(4):448-474, 2021. doi:10.1002/jrsm.1475.

C. Roever, T. Friede. Discrete approximation of a mixture distribution via restricted divergence. Journal of Computational and Graphical Statistics, 26(1):217-222, 2017. doi:10.1080/10618600.2016.1276840.

C. Roever, T. Friede. Bounds for the weight of external data in shrinkage estimation. Biometrical Journal, 63(5):1131-1143, 2021. doi:10.1002/bimj.202000227.

J.O. Berger, J. Deely. A Bayesian approach to ranking and selection of related means with alternatives to analysis-of-variance methodology. Journal of the American Statistical Association, 83(402):364-373, 1988. doi:10.1080/01621459.1988.10478606.

O. Bodnar, A. Link, C. Elster. Objective Bayesian inference for a generalized marginal random effects model. Bayesian Analysis, 11(1):25-45, 2016. doi:10.1214/14-BA933.

A. Gelman, J.B. Carlin, H.S. Stern, D.B. Rubin. Bayesian data analysis. Chapman & Hall / CRC, Boca Raton, 1997.

A. Gelman. Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1(3):515-534, 2006. doi:10.1214/06-BA117A.

J. Hartung, G. Knapp, B.K. Sinha. Statistical meta-analysis with applications. Wiley, Hoboken, 2008.

L.V. Hedges, I. Olkin. Statistical methods for meta-analysis. Academic Press, San Diego, 1985.

R.E. Kass, A.E. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773-795, 1995. doi:10.2307/2291091.

D.V. Lindley. A statistical paradox. Biometrika, 44(1/2):187-192, 1957. doi:10.1093/biomet/44.1-2.187.

B. Neuenschwander, G. Capkun-Niggli, M. Branson, D.J. Spiegelhalter. Summarizing historical information on controls in clinical trials. Clinical Trials, 7(1):5-18, 2010. doi:10.1177/1740774509356002.

A. O'Hagan, L. Pericchi. Bayesian heavy-tailed models and conflict resolution: A review. Brazilian Journal of Probability and Statistics, 26(4):372-401, 2012. doi:10.1214/11-BJPS164.

S. Shalloway. The evidentiary credible region. Bayesian Analysis, 9(4):909-922, 2014. doi:10.1214/14-BA883.

D.J. Spiegelhalter, K.R. Abrams, J.P. Myles. Bayesian approaches to clinical trials and health-care evaluation. Wiley & Sons, 2004.

R. Tibshirani. Noninformative priors for one parameter of many. Biometrika, 76(3):604-608, 1989. doi:10.1093/biomet/76.3.604.

W. Viechtbauer. Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3):1-48, 2010. doi:10.18637/jss.v036.i03.

See Also

forestplot.bayesmeta, plot.bayesmeta, escalc, bmr, compute.es.

Examples

########################################
# example data by Snedecor and Cochran:
data("SnedecorCochran")

## Not run: 
# analysis using improper uniform prior
# (may take a few seconds to compute!):
ma01 <- bayesmeta(y=SnedecorCochran[,"mean"], sigma=sqrt(SnedecorCochran[,"var"]),
                  label=SnedecorCochran[,"no"])

# analysis using an informative prior
# (normal for mu and half-Cauchy for tau (scale=10))
# (may take a few seconds to compute!):
ma02 <- bayesmeta(y=SnedecorCochran[,"mean"], sigma=sqrt(SnedecorCochran[,"var"]),
                  label=SnedecorCochran[,"no"],
                  mu.prior.mean=50, mu.prior.sd=50,
                  tau.prior=function(x){return(dhalfcauchy(x, scale=10))})

# show some summary statistics:
print(ma01)
summary(ma01)

# show some plots:
forestplot(ma01)
plot(ma01)

# compare resulting marginal densities;
# the effect parameter (mu):
mu <- seq(30, 80, le=200)
plot(mu, ma02$dposterior(mu=mu), type="l", lty="dashed",
     xlab=expression("effect "*mu),
     ylab=expression("marginal posterior density"),
     main="Snedecor/Cochran example")
lines(mu, ma01$dposterior(mu=mu), lty="solid")

# the heterogeneity parameter (tau):
tau <- seq(0, 50, le=200)
plot(tau, ma02$dposterior(tau=tau), type="l", lty="dashed",
     xlab=expression("heterogeneity "*tau),
     ylab=expression("marginal posterior density"),
     main="Snedecor/Cochran example")
lines(tau, ma01$dposterior(tau=tau), lty="solid")

# compute posterior median relative heterogeneity I-squared:
ma01$I2(tau=ma01$summary["median","tau"])

# determine 90 percent upper limits on the heterogeneity tau:
ma01$qposterior(tau=0.90)
ma02$qposterior(tau=0.90)
# determine shortest 90 percent credible interval for tau:
ma01$post.interval(tau.level=0.9, method="shortest")
## End(Not run)


#####################################
# example data by Sidik and Jonkman:
data("SidikJonkman2007")
# add log-odds-ratios and corresponding standard errors:
sj <- SidikJonkman2007
sj <- cbind(sj, "log.or"=log(sj[,"lihr.events"])-log(sj[,"lihr.cases"]-sj[,"lihr.events"])
                             -log(sj[,"oihr.events"])+log(sj[,"oihr.cases"]-sj[,"oihr.events"]),
                "log.or.se"=sqrt(1/sj[,"lihr.events"] + 1/(sj[,"lihr.cases"]-sj[,"lihr.events"])
                                 + 1/sj[,"oihr.events"] + 1/(sj[,"oihr.cases"]-sj[,"oihr.events"])))

## Not run: 
# analysis using weakly informative half-normal prior
# (may take a few seconds to compute!):
ma03a <- bayesmeta(y=sj[,"log.or"], sigma=sj[,"log.or.se"],
                   label=sj[,"id.sj"],
                   tau.prior=function(t){dhalfnormal(t,scale=1)})

# alternatively: may utilize "metafor" package's "escalc()" function
# to compute log-ORs and standard errors:
require("metafor")
es <- escalc(measure="OR",
             ai=lihr.events, n1i=lihr.cases,
             ci=oihr.events, n2i=oihr.cases,
             slab=id, data=SidikJonkman2007)
# apply "bayesmeta()" function directly to "escalc" object:
ma03b <- bayesmeta(es, tau.prior=function(t){dhalfnormal(t,scale=1)})
# "ma03a" and "ma03b" should be identical:
print(ma03a$summary)
print(ma03b$summary)
# compare to metafor's (frequentist) random-effects meta-analysis:
rma03a <- rma.uni(es)
rma03b <- rma.uni(es, method="EB", knha=TRUE)
# compare mu estimates (estimate and confidence interval):
plot(ma03b, which=3)
abline(v=c(rma03a$b, rma03a$ci.lb, rma03a$ci.ub), col="red", lty=c(1,2,2))
abline(v=c(rma03b$b, rma03b$ci.lb, rma03b$ci.ub), col="green3", lty=c(1,2,2))
# compare tau estimates (estimate and confidence interval):
plot(ma03b, which=4)
abline(v=confint(rma03a)$random["tau",], col="red", lty=c(1,2,2))       
abline(v=confint(rma03b)$random["tau",], col="green3", lty=c(1,3,3))       

# show heterogeneity's posterior density:
plot(ma03a, which=4, main="Sidik/Jonkman example")

# show some numbers (mode, median and mean):
abline(v=ma03a$summary[c("mode","median","mean"),"tau"], col="blue")

# compare with Sidik and Jonkman's estimates:
sj.estimates <- sqrt(c("MM"  = 0.429,   # method of moments estimator
                       "VC"  = 0.841,   # variance component type estimator
                       "ML"  = 0.562,   # maximum likelihood estimator
                       "REML"= 0.598,   # restricted maximum likelihood estimator
                       "EB"  = 0.703,   # empirical Bayes estimator
                       "MV"  = 0.818,   # model error variance estimator
                       "MVvc"= 0.747))  # a variation of the MV estimator
abline(v=sj.estimates, col="red", lty="dashed")
## End(Not run)


###########################
# example data by Cochran:
data("Cochran1954")

## Not run: 
# analysis using improper uniform prior
# (may take a few seconds to compute!):
ma04 <- bayesmeta(y=Cochran1954[,"mean"], sigma=sqrt(Cochran1954[,"se2"]),
                  label=Cochran1954[,"observer"])

# show joint posterior density:
plot(ma04, which=2, main="Cochran example")
# show (known) true parameter value:
abline(h=161)

# pick a point estimate for tau:
tau <- ma04$summary["median","tau"]
# highlight two point hypotheses (fixed vs. random effects):
abline(v=c(0, tau), col="orange", lty="dotted", lwd=2)

# show marginal posterior density:
plot(ma04, which=3)
abline(v=161)
# show the conditional distributions of the effect mu
# at two tau values corresponding to fixed and random effects models:
cm <- ma04$cond.moment(tau=c(0,tau))
mu <- seq(130,200, le=200)
lines(mu, dnorm(mu, mean=cm[1,"mean"], sd=cm[1,"sd"]), col="orange", lwd=2)
lines(mu, dnorm(mu, mean=cm[2,"mean"], sd=cm[2,"sd"]), col="orange", lwd=2)

# determine a range of tau values:
tau <- seq(0, ma04$qposterior(tau=0.99), length=100)
# compute conditional posterior moments:
cm.overall <- ma04$cond.moment(tau=tau)
# compute study-specific conditional posterior moments:
cm.indiv <- ma04$cond.moment(tau=tau, individual=TRUE)
# show forest plot along with conditional posterior means:
par(mfrow=c(1,2))
  plot(ma04, which=1, main="Cochran 1954 example")
  matplot(tau, cm.indiv[,"mean",], type="l", lty="solid", col=1:ma04$k,
          xlim=c(0,max(tau)*1.2), xlab=expression("heterogeneity "*tau),
          ylab=expression("(conditional) shrinkage estimate E["*
                           theta[i]*"|"*list(tau, y, sigma)*"]"))
  text(rep(max(tau)*1.01, ma04$k), cm.indiv[length(tau),"mean",],
       ma04$label, col=1:ma04$k, adj=c(0,0.5))
  lines(tau, cm.overall[,"mean"], lty="dashed", lwd=2)
  text(max(tau)*1.01, cm.overall[length(tau),"mean"],
       "overall", adj=c(0,0.5))
par(mfrow=c(1,1))

# show the individual effects' posterior distributions:
theta <- seq(120, 240, le=300)
plot(range(theta), c(0,0.1), type="n", xlab=expression(theta[i]), ylab="")
for (i in 1:ma04$k) {
  # draw estimate +/- uncertainty as a Gaussian:
  lines(theta, dnorm(theta, mean=ma04$y[i], sd=ma04$sigma[i]), col=i+1, lty="dotted")
  # draw effect's posterior distribution:
  lines(theta, ma04$dposterior(theta=theta, indiv=i), col=i+1, lty="solid")
}
abline(h=0)
legend(max(theta), 0.1, legend=ma04$label, col=(1:ma04$k)+1, pch=15, xjust=1, yjust=1)

## End(Not run)
