This function allows one to derive the posterior distribution of the two parameters in a random-effects meta-analysis and provides functions to evaluate joint and marginal posterior probability distributions, etc.
bayesmeta(y, ...)
## Default S3 method:
bayesmeta(y, sigma, labels = names(y),
tau.prior = "uniform",
mu.prior = c("mean"=NA,"sd"=NA),
mu.prior.mean = mu.prior[1], mu.prior.sd = mu.prior[2],
interval.type = c("shortest", "central"),
delta = 0.01, epsilon = 0.0001,
rel.tol.integrate = 2^16*.Machine$double.eps,
abs.tol.integrate = 0.0,
tol.uniroot = rel.tol.integrate, ...)
## S3 method for class 'escalc'
bayesmeta(y, labels = NULL, ...)

y 
vector of estimates, or an escalc object (see below and the Details). 
sigma 
vector of standard errors associated with y. 
labels 
(optional) a vector of labels corresponding to y. 
tau.prior 
a function returning the prior density for the heterogeneity parameter (τ), or a character string specifying one of the pre-defined priors (see the Details below). The default is "uniform". 
mu.prior 
the mean and standard deviation of the normal prior distribution for the effect μ. If unspecified, an (improper) uniform prior is used. 
mu.prior.mean, mu.prior.sd 
alternative parameters to specify the prior distribution for the effect μ (see above). 
interval.type 
the type of (credible, prediction, shrinkage) interval to be returned by default; either "shortest" (the default) or "central" (equal-tailed). 
delta, epsilon 
the parameters specifying the desired accuracy for approximation of the μ posterior(s), and with that determining the number of τ support points being used internally. Smaller values imply greater accuracy and greater computational burden (Roever and Friede, 2017). 
rel.tol.integrate, abs.tol.integrate, tol.uniroot 
the accuracy parameters passed on to the internally used integrate() and uniroot() functions. 
... 
other arguments passed on (e.g., from the escalc method on to the default method). 
The common random-effects meta-analysis model may be stated as
y[i] | mu, sigma[i], tau  ~  Normal(mu, sigma[i]^2 + tau^2)
where the data (y, σ) enter as y[i], the ith estimate, which is associated with standard error sigma[i] > 0, and i=1,...,k. The model includes two unknown parameters, namely the (mean) effect μ, and the heterogeneity τ. Alternatively, the model may also be formulated via an intermediate step as
y[i] | theta[i], sigma[i]  ~  Normal(theta[i], sigma[i]^2),
theta[i] | mu, tau  ~  Normal(mu, tau^2),
where the theta[i] denote the trial-specific means that are then measured through the estimate y[i] with an associated measurement uncertainty sigma[i]. The theta[i] again differ from trial to trial and are distributed around a common mean μ with standard deviation τ.
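The two-step formulation above can be illustrated by simulating data from the hierarchical model; the following is a minimal sketch in base R, with arbitrarily chosen (hypothetical) true parameter values:

```r
# simulate data from the random-effects model (hypothetical example values):
set.seed(123)
k     <- 5     # number of studies
mu    <- 0.3   # true overall effect
tau   <- 0.2   # true heterogeneity
sigma <- runif(k, min=0.1, max=0.4)      # per-study standard errors
theta <- rnorm(k, mean=mu, sd=tau)       # study-specific true effects
y     <- rnorm(k, mean=theta, sd=sigma)  # observed estimates
# marginally (integrating out theta), y[i] ~ Normal(mu, sigma[i]^2 + tau^2)
```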
The bayesmeta() function utilizes the fact that the joint posterior distribution p(μ, τ | y, σ) may be partly analytically integrated out to determine the integrals necessary for coherent Bayesian inference on one or both of the parameters.
As long as we assume that the prior probability distribution may be factored into independent marginals, p(μ, τ) = p(μ) * p(τ), and either an (improper) uniform or a normal prior is used for the effect μ, the joint likelihood p(y | μ, τ) may be analytically marginalized over μ, yielding the marginal likelihood function p(y | τ). Using any prior p(τ) for the heterogeneity, the one-dimensional marginal posterior p(τ | y) = p(y | τ) * p(τ) * const. may then be treated numerically. As the conditional posterior p(μ | τ, y) is a normal distribution, inference on the remaining joint (p(μ, τ | y)) or marginal (p(μ | y)) posterior may be approached numerically (using the DIRECT algorithm) as well. Accuracy of the computations is determined by the delta (maximum divergence δ) and epsilon (tail probability ε) parameters (Roever and Friede, 2017).
What constitutes a sensible prior for both μ and τ depends (as usual) very much on the context. Potential candidates for informative (or weakly informative) heterogeneity (τ) priors may include the half-normal, half-Student-t, half-Cauchy, exponential, or Lomax distributions; we recommend the use of heavy-tailed prior distributions if, in case of prior/data conflict, the prior is supposed to be discounted (O'Hagan and Pericchi, 2012). A sensible informative prior might also be a log-normal or a scaled inverse χ^2 distribution.
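For instance, the half-normal and half-Cauchy densities mentioned above are available through the package's dhalfnormal() and dhalfcauchy() functions and may be supplied to the tau.prior argument as a function; a brief sketch (the scale values are arbitrary illustrations, not recommendations):

```r
library("bayesmeta")
# evaluate two candidate heterogeneity priors on a grid of tau values:
tau <- seq(0, 2, by=0.5)
dhalfnormal(tau, scale=0.5)   # half-normal prior density
dhalfcauchy(tau, scale=0.5)   # heavier-tailed half-Cauchy prior density
# either may then be used as a heterogeneity prior via, e.g.:
#   bayesmeta(..., tau.prior=function(t) dhalfcauchy(t, scale=0.5))
```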
One might argue that an uninformative prior for τ may be
uniform or monotonically decreasing in τ. Another option is
to use the Jeffreys prior (see the tau.prior
argument
above). The Jeffreys prior implemented here is the variant also
described by Tibshirani (1989) that results from fixing the location
parameter (μ) and considering the Fisher information element
corresponding to the heterogeneity τ only. This prior also
constitutes the Berger/Bernardo reference prior for the present
problem (Bodnar et al., 2016). The uniform shrinkage and
DuMouchel priors are described in Spiegelhalter et al. (2004,
Sec. 5.7.3).
The procedure is able to handle improper priors (and does so by
default), but of course the usual care must be taken here, as the
resulting posterior may or may not be a proper
probability distribution.
Note that a wide range of different types of endpoints may be treated
on a continuous scale after an appropriate transformation; for
example, count data may be handled by considering corresponding
logarithmic odds ratios. Many such transformations are implemented
in the metafor package's escalc
function
and may be directly used as an input to the bayesmeta()
function; see also the example below. Alternatively, the
compute.es package also provides a range of effect sizes to be
computed from different data types.
The bayesmeta() function eventually generates some basic summary statistics, but most importantly it provides an object containing a range of functions allowing one to evaluate posterior distributions; this includes joint and marginal posteriors, prior and likelihood, predictive distributions, densities, cumulative distributions and quantile functions. For more details see also the documentation and examples below.
Use of the individual argument allows one to access posteriors of study-specific (shrinkage) effects (theta[i]).
The predict argument may be used to access the predictive distribution of a future study's effect (theta[k+1]), facilitating a meta-analytic-predictive (MAP) approach (Neuenschwander et al., 2010).
When specifying the tau.prior argument as a character string (and not as a prior density function), the actual prior probability density functions corresponding to the possible choices are as follows:
"uniform"
 the (improper) uniform prior in τ:
p(tau) = 1
"sqrt"
 the (improper) uniform prior in sqrt(tau):
p(tau) = tau^(-1/2) = 1/sqrt(tau)
"Jeffreys"
 Tibshirani's noninformative prior,
a variation of the Jeffreys prior, which here also constitutes
the Berger/Bernardo reference prior for τ:
p(tau) = sqrt(sum((tau/(sigma[i]^2+tau^2))^2))
(This is also an improper prior whose density does not integrate to 1).
"BergerDeely"
 the (improper) Berger/Deely prior:
p(tau) = prod((tau / (sigma[i]^2+tau^2))^(1/k))
This is a variation of the above Jeffreys prior, and both are equal in case all standard errors (sigma[i]) are the same.
"conventional"
 the (proper) conventional prior:
p(tau) = prod((tau / (sigma[i]^2+tau^2)^(3/2))^(1/k))
This is a proper variation of the above Berger/Deely prior intended especially for testing and model selection purposes.
"DuMouchel"
 the (proper) DuMouchel prior:
p(tau) = s0 / (s0+tau)^2
where s0 = sqrt(s0^2), and s0^2 is the harmonic mean of the squared standard errors (as for the uniform shrinkage prior below).
"shrinkage"
 the (proper) uniform shrinkage prior:
p(tau) = (2*s0^2*tau) / (s0^2+tau^2)^2
where s0^2 = k / sum(sigma[i]^(-2)) is the harmonic mean of the squared standard errors sigma[i]^2.
"I2"
 the (proper) uniform prior in I^2:
p(tau) = (2*sigmaHat^2*tau) / (sigmaHat^2+tau^2)^2
where sigmaHat^2 = (k-1) * sum(1/sigma[i]^2) / (sum(1/sigma[i]^2)^2 - sum(1/sigma[i]^4)). This prior is similar to the uniform shrinkage prior, except for the use of sigmaHat^2 instead of s0^2.
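These densities are straightforward to reproduce; for example, the uniform shrinkage prior may be sketched in a few lines of base R (the standard errors here are hypothetical illustration values):

```r
# uniform shrinkage prior density, implemented from the formula above:
shrinkage.prior <- function(tau, sigma) {
  s02 <- length(sigma) / sum(sigma^(-2))  # harmonic mean of squared std. errors
  (2 * s02 * tau) / (s02 + tau^2)^2
}
sigma <- c(0.2, 0.3, 0.5)              # hypothetical standard errors
shrinkage.prior(c(0, 0.25, 0.5), sigma)  # density values at a few tau points
# being a proper prior, the density should integrate to (approximately) 1:
integrate(shrinkage.prior, lower=0, upper=Inf, sigma=sigma)
```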
For empirically motivated informative heterogeneity priors see also
the TurnerEtAlPrior()
and RhodesEtAlPrior()
functions.
Credible intervals (as well as prediction and shrinkage intervals)
may be determined in different ways. By default, shortest
intervals are returned, which for unimodal posteriors (the usual
case) is equivalent to the highest posterior density region
(Gelman et al., 1997, Sec. 2.3).
Alternatively, central (equal-tailed) intervals may also be derived.
The default behaviour may be controlled via the interval.type
argument, or also by using the method
argument with each
individual call of the $post.interval()
function (see below).
A third option, although not available for prediction or shrinkage
intervals, and hence not as an overall default choice, but only for
the $post.interval()
function, is to
determine the evidentiary credible interval, which has the
advantage of being parameterization invariant (Shalloway, 2014).
Bayes factors (Kass and Raftery, 1995) for the two hypotheses of
τ=0 and μ=0 are provided in the $bayesfactor
element; low or high values here constitute evidence
against or in favour of the hypotheses,
respectively. Bayes factors are based on marginal likelihoods and
can only be computed if the priors for heterogeneity and effect are
proper. Bayes factors for other hypotheses can be computed using the
marginal likelihood (as provided through the $marginal
element) and the $likelihood()
function. Bayes factors must
be interpreted with great caution, as they are susceptible to
Lindley's paradox (Lindley, 1957), which especially implies
that variations of the prior specifications that have only minuscule
effects on the posterior distribution may have a substantial impact
on Bayes factors (via the marginal likelihood). For more details on
the problems and challenges related to Bayes factors see also
Gelman et al. (1997, Sec. 7.4).
Besides the ‘actual’ Bayes factors, minimum Bayes factors are also provided (Spiegelhalter et al., 2004; Sec. 4.4). The minimum Bayes factor for the hypothesis of μ=0 constitutes the minimum across all possible priors for μ and hence gives a measure of how much (or how little) evidence against the hypothesis is provided by the data at most. It is independent of the particular effect prior used in the analysis, but still dependent on the heterogeneity prior. Analogously, the same is true for the heterogeneity's minimum Bayes factor. A minimum Bayes factor can also be computed when only one of the priors is proper.
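Assuming proper priors have been specified for both parameters, the (actual and minimum) Bayes factors may then be inspected via the result's $bayesfactor element; a sketch using the example data from below, with arbitrarily chosen (hypothetical) prior settings:

```r
library("bayesmeta")
data("SnedecorCochran")
# proper priors for both parameters, so that Bayes factors are available
# (may take a few seconds to compute!):
ma <- bayesmeta(y=SnedecorCochran[,"mean"],
                sigma=sqrt(SnedecorCochran[,"var"]),
                mu.prior.mean=50, mu.prior.sd=50,
                tau.prior=function(t) dhalfnormal(t, scale=10))
ma$bayesfactor  # actual and minimum Bayes factors for tau=0 and mu=0
```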
Accuracy of the numerical results is determined by four parameters,
namely, the accuracy of numerical integration as specified through the
rel.tol.integrate
and abs.tol.integrate
arguments (which
are internally passed on to the integrate
function), and the accuracy of the grid approximation used for
integrating out the heterogeneity as specified through the
delta
and epsilon
arguments (Roever and Friede,
2017). As these may also heavily impact on the computation time, be
careful when changing these from their default values. You can monitor
the effect of different settings by checking the number and range of
support points returned in the $support
element.
Conditional on a given τ value, the posterior expectations of the overall effect (μ) as well as the shrinkage estimates (theta[i]) result as convex combinations of the estimates y[i]. The weights associated with each estimate y[i] are commonly quoted in frequentist meta-analysis results in order to quantify (arguably somewhat heuristically) each study's contribution to the overall estimates, often in terms of percentages.
In a Bayesian meta-analysis, these numbers do not immediately arise, since the heterogeneity is marginalized over. However, due
to linearity, the posterior mean effects may still be expressed in
terms of linear combinations of initial estimates y[i],
with weights now given by the posterior mean weights,
marginalized over the heterogeneity τ. The posterior mean
weights are returned in the $weights
and
$weights.theta
elements, for the overall effect μ
as well as for the shrinkage estimates theta[i].
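As a minimal sketch of the conditional case (hypothetical numbers; assuming an improper uniform effect prior, so that the conditional posterior mean of μ reduces to the inverse-variance weighted average):

```r
# conditional posterior mean of mu for a fixed tau value:
y     <- c(0.1, 0.4, 0.3)   # hypothetical estimates
sigma <- c(0.2, 0.3, 0.25)  # hypothetical standard errors
tau   <- 0.1                # fixed heterogeneity value
w <- 1 / (sigma^2 + tau^2)  # inverse (marginal) variance weights
w <- w / sum(w)             # normalize weights to sum to 1
sum(w * y)                  # conditional posterior mean: a convex combination
```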
A list of class bayesmeta
containing the following elements:
y 
vector of estimates (the input data). 
sigma 
vector of standard errors corresponding to y (the input data). 
labels 
vector of labels corresponding to y (the input data). 
k 
number of data points (in y). 
tau.prior 
the prior probability density function for τ. 
mu.prior.mean 
the prior mean of μ. 
mu.prior.sd 
the prior standard deviation of μ. 
dprior 
a function returning the prior density of the parameters. 
tau.prior.proper 
a logical flag indicating whether the heterogeneity prior is a proper probability distribution. 
likelihood 
a function returning the likelihood of the parameters. 
dposterior 
a function returning the posterior density of the parameters. 
pposterior 
a function returning the posterior cumulative distribution function. 
qposterior 
a function returning posterior quantiles. 
rposterior 
a function generating random draws from the posterior distribution. 
post.interval 
a function returning credible (or prediction or shrinkage) intervals. 
cond.moment 
a function returning conditional posterior moments (given a value of τ). 
I2 
a function returning the relative heterogeneity I^2 corresponding to a given τ value. 
summary 
a matrix of basic summary statistics for the marginal posterior distributions. 
interval.type 
the interval.type input argument ("shortest" or "central"). 
ML 
a matrix of maximum likelihood (ML) estimates. 
MAP 
a matrix of maximum a posteriori (MAP) estimates. 
theta 
a matrix of summary statistics for the study-specific (shrinkage) effects theta[i]. 
weights 
a vector of posterior mean weights associated with the overall effect estimate. 
weights.theta 
a matrix of posterior mean weights associated with the shrinkage estimates. 
marginal.likelihood 
the marginal likelihood of the data (this number is only computed if proper effect and heterogeneity priors are specified). 
bayesfactor 
Bayes factors and minimum Bayes factors for the two hypotheses of τ=0 and μ=0. These depend on the marginal likelihood and hence can only be computed if proper effect and/or heterogeneity priors are specified; see also remark above. 
support 
a matrix containing the τ support points used internally for the grid approximation. 
delta, epsilon 
the ‘delta’ and ‘epsilon’ input parameters determining the accuracy of the grid approximation. 
rel.tol.integrate, abs.tol.integrate, tol.uniroot 
the input parameters determining the numerical accuracy of the internally used integrate() and uniroot() functions. 
call 
an object of class call giving the function call. 
init.time 
the computation time (in seconds) used to generate the bayesmeta object. 
Christian Roever [email protected]
C. Roever. Bayesian random-effects meta-analysis using the bayesmeta R package. arXiv preprint 1711.08683 (submitted for publication), 2017.
C. Roever, T. Friede. Discrete approximation of a mixture distribution via restricted divergence. Journal of Computational and Graphical Statistics, 26(1):217-222, 2017.
J.O. Berger, J. Deely. A Bayesian approach to ranking and selection of related means with alternatives to analysis-of-variance methodology. Journal of the American Statistical Association, 83(402):364-373, 1988.
O. Bodnar, A. Link, C. Elster. Objective Bayesian inference for a generalized marginal random effects model. Bayesian Analysis, 11(1):25-45, 2016.
A. Gelman, J.B. Carlin, H.S. Stern, D.B. Rubin. Bayesian data analysis. Chapman & Hall / CRC, Boca Raton, 1997.
A. Gelman. Prior distributions for variance parameters in hierarchical models. Bayesian Analysis, 1(3):515-534, 2006.
J. Hartung, G. Knapp, B.K. Sinha. Statistical meta-analysis with applications. Wiley, Hoboken, 2008.
L.V. Hedges, I. Olkin. Statistical methods for meta-analysis. Academic Press, San Diego, 1985.
R.E. Kass, A.E. Raftery. Bayes factors. Journal of the American Statistical Association, 90(430):773-795, 1995.
D.V. Lindley. A statistical paradox. Biometrika, 44(1/2):187-192, 1957.
B. Neuenschwander, G. Capkun-Niggli, M. Branson, D.J. Spiegelhalter. Summarizing historical information on controls in clinical trials. Clinical Trials, 7(1):5-18, 2010.
A. O'Hagan, L. Pericchi. Bayesian heavy-tailed models and conflict resolution: A review. Brazilian Journal of Probability and Statistics, 26(4):372-401, 2012.
S. Shalloway. The evidentiary credible region. Bayesian Analysis, 9(4):909-922, 2014.
D.J. Spiegelhalter, K.R. Abrams, J.P. Myles. Bayesian approaches to clinical trials and health-care evaluation. Wiley & Sons, 2004.
R. Tibshirani. Noninformative priors for one parameter of many. Biometrika, 76(3):604-608, 1989.
W. Viechtbauer. Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3):1-48, 2010.
forestplot.bayesmeta, plot.bayesmeta, escalc, compute.es.
########################################
# example data by Snedecor and Cochran:
data("SnedecorCochran")
## Not run:
# analysis using improper uniform prior
# (may take a few seconds to compute!):
ma01 <- bayesmeta(y=SnedecorCochran[,"mean"], sigma=sqrt(SnedecorCochran[,"var"]),
                  labels=SnedecorCochran[,"no"])
# analysis using an informative prior
# (normal for mu and half-Cauchy for tau (scale=10))
# (may take a few seconds to compute!):
ma02 <- bayesmeta(y=SnedecorCochran[,"mean"], sigma=sqrt(SnedecorCochran[,"var"]),
                  labels=SnedecorCochran[,"no"],
                  mu.prior.mean=50, mu.prior.sd=50,
                  tau.prior=function(x){return(dhalfcauchy(x, scale=10))})
# show some summary statistics:
print(ma01)
summary(ma01)
# show some plots:
forestplot(ma01)
plot(ma01)
# compare resulting marginal densities;
# the effect parameter (mu):
mu <- seq(30, 80, le=200)
plot(mu, ma02$dposterior(mu=mu), type="l", lty="dashed",
xlab=expression("effect "*mu),
ylab=expression("marginal posterior density"),
main="Snedecor/Cochran example")
lines(mu, ma01$dposterior(mu=mu), lty="solid")
# the heterogeneity parameter (tau):
tau <- seq(0, 50, le=200)
plot(tau, ma02$dposterior(tau=tau), type="l", lty="dashed",
xlab=expression("heterogeneity "*tau),
ylab=expression("marginal posterior density"),
main="Snedecor/Cochran example")
lines(tau, ma01$dposterior(tau=tau), lty="solid")
# compute posterior median relative heterogeneity Isquared:
ma01$I2(tau=ma01$summary["median","tau"])
# determine 90 percent upper limits on the heterogeneity tau:
ma01$qposterior(tau=0.90)
ma02$qposterior(tau=0.90)
# determine shortest 90 percent credible interval for tau:
ma01$post.interval(tau.level=0.9, method="shortest")
## End(Not run)
#####################################
# example data by Sidik and Jonkman:
data("SidikJonkman2007")
# add logoddsratios and corresponding standard errors:
sj <- SidikJonkman2007
sj <- cbind(sj, "log.or"=log(sj[,"lihr.events"]) - log(sj[,"lihr.cases"]-sj[,"lihr.events"])
                - log(sj[,"oihr.events"]) + log(sj[,"oihr.cases"]-sj[,"oihr.events"]),
            "log.or.se"=sqrt(1/sj[,"lihr.events"] + 1/(sj[,"lihr.cases"]-sj[,"lihr.events"])
                             + 1/sj[,"oihr.events"] + 1/(sj[,"oihr.cases"]-sj[,"oihr.events"])))
## Not run:
# analysis using weakly informative half-normal prior
# (may take a few seconds to compute!):
ma03a <- bayesmeta(y=sj[,"log.or"], sigma=sj[,"log.or.se"],
                   labels=sj[,"id.sj"],
                   tau.prior=function(t){dhalfnormal(t,scale=1)})
# alternatively: may utilize "metafor" package's "escalc()" function
# to compute logORs and standard errors:
require("metafor")
es <- escalc(measure="OR",
             ai=lihr.events, n1i=lihr.cases,
             ci=oihr.events, n2i=oihr.cases,
             slab=id, data=SidikJonkman2007)
# apply "bayesmeta()" function directly to "escalc" object:
ma03b <- bayesmeta(es, tau.prior=function(t){dhalfnormal(t,scale=1)})
# "ma03a" and "ma03b" should be identical:
print(ma03a$summary)
print(ma03b$summary)
# compare to metafor's (frequentist) random-effects meta-analysis:
rma03a <- rma.uni(es)
rma03b <- rma.uni(es, method="EB", knha=TRUE)
# compare mu estimates (estimate and confidence interval):
plot(ma03b, which=3)
abline(v=c(rma03a$b, rma03a$ci.lb, rma03a$ci.ub), col="red", lty=c(1,2,2))
abline(v=c(rma03b$b, rma03b$ci.lb, rma03b$ci.ub), col="green3", lty=c(1,2,2))
# compare tau estimates (estimate and confidence interval):
plot(ma03b, which=4)
abline(v=confint(rma03a)$random["tau",], col="red", lty=c(1,2,2))
abline(v=confint(rma03b)$random["tau",], col="green3", lty=c(1,3,3))
# show heterogeneity's posterior density:
plot(ma03a, which=4, main="Sidik/Jonkman example")
# show some numbers (mode, median and mean):
abline(v=ma03a$summary[c("mode","median","mean"),"tau"], col="blue")
# compare with Sidik and Jonkman's estimates:
sj.estimates <- sqrt(c("MM"  = 0.429, # method of moments estimator
"VC" = 0.841, # variance component type estimator
"ML" = 0.562, # maximum likelihood estimator
"REML"= 0.598, # restricted maximum likelihood estimator
"EB" = 0.703, # empirical Bayes estimator
"MV" = 0.818, # model error variance estimator
"MVvc"= 0.747)) # a variation of the MV estimator
abline(v=sj.estimates, col="red", lty="dashed")
## End(Not run)
###########################
# example data by Cochran:
data("Cochran1954")
## Not run:
# analysis using improper uniform prior
# (may take a few seconds to compute!):
ma04 <- bayesmeta(y=Cochran1954[,"mean"], sigma=sqrt(Cochran1954[,"se2"]),
                  labels=Cochran1954[,"observer"])
# show joint posterior density:
plot(ma04, which=2, main="Cochran example")
# show (known) true parameter value:
abline(h=161)
# pick a point estimate for tau:
tau <- ma04$summary["median","tau"]
# highlight two point hypotheses (fixed vs. random effects):
abline(v=c(0, tau), col="orange", lty="dotted", lwd=2)
# show marginal posterior density:
plot(ma04, which=3)
abline(v=161)
# show the conditional distributions of the effect mu
# at two tau values corresponding to fixed and random effects models:
cm <- ma04$cond.moment(tau=c(0,tau))
mu <- seq(130, 200, le=200)
lines(mu, dnorm(mu, mean=cm[1,"mean"], sd=cm[1,"sd"]), col="orange", lwd=2)
lines(mu, dnorm(mu, mean=cm[2,"mean"], sd=cm[2,"sd"]), col="orange", lwd=2)
# determine a range of tau values:
tau <- seq(0, ma04$qposterior(tau=0.99), length=100)
# compute conditional posterior moments:
cm.overall <- ma04$cond.moment(tau=tau)
# compute studyspecific conditional posterior moments:
cm.indiv <- ma04$cond.moment(tau=tau, individual=TRUE)
# show forest plot along with conditional posterior means:
par(mfrow=c(1,2))
plot(ma04, which=1, main="Cochran 1954 example")
matplot(tau, cm.indiv[,"mean",], type="l", lty="solid", col=1:ma04$k,
        xlim=c(0,max(tau)*1.2), xlab=expression("heterogeneity "*tau),
        ylab=expression("(conditional) shrinkage estimate E["*
                        theta[i]*" | "*list(tau, y, sigma)*"]"))
text(rep(max(tau)*1.01, ma04$k), cm.indiv[length(tau),"mean",],
     ma04$labels, col=1:ma04$k, adj=c(0,0.5))
lines(tau, cm.overall[,"mean"], lty="dashed", lwd=2)
text(max(tau)*1.01, cm.overall[length(tau),"mean"],
"overall", adj=c(0,0.5))
par(mfrow=c(1,1))
# show the individual effects' posterior distributions:
theta <- seq(120, 240, le=300)
plot(range(theta), c(0,0.1), type="n", xlab=expression(theta[i]), ylab="")
for (i in 1:ma04$k) {
# draw estimate +/ uncertainty as a Gaussian:
lines(theta, dnorm(theta, mean=ma04$y[i], sd=ma04$sigma[i]), col=i+1, lty="dotted")
# draw effect's posterior distribution:
lines(theta, ma04$dposterior(theta=theta, indiv=i), col=i+1, lty="solid")
}
abline(h=0)
legend(max(theta), 0.1, legend=ma04$labels, col=(1:ma04$k)+1, pch=15, xjust=1, yjust=1)
## End(Not run)
