Calculation of an overall proportion from studies reporting a single
proportion. Inverse variance method and generalised linear mixed
model (GLMM) are available for pooling. For GLMMs, the
rma.glmm
function from R package
metafor (Viechtbauer 2010) is called internally.
metaprop(event, n, studlab,
data=NULL, subset=NULL, exclude=NULL,
method = "Inverse",
sm=gs("smprop"),
incr=gs("incr"), allincr=gs("allincr"),
addincr=gs("addincr"),
method.ci=gs("method.ci"),
level=gs("level"), level.comb=gs("level.comb"),
comb.fixed=gs("comb.fixed"), comb.random=gs("comb.random"),
hakn=gs("hakn"),
method.tau=
ifelse(!is.na(charmatch(tolower(method), "glmm", nomatch = NA)),
"ML", gs("method.tau")),
tau.preset=NULL, TE.tau=NULL,
tau.common=gs("tau.common"),
prediction=gs("prediction"), level.predict=gs("level.predict"),
null.effect=NA,
method.bias=gs("method.bias"),
backtransf=gs("backtransf"),
pscale=1,
title=gs("title"), complab=gs("complab"), outclab="",
byvar, bylab, print.byvar=gs("print.byvar"),
byseparator = gs("byseparator"),
keepdata=gs("keepdata"),
warn=gs("warn"),
...)

event 
Number of events. 
n 
Number of observations. 
studlab 
An optional vector with study labels. 
data 
An optional data frame containing the study information, i.e., event and n. 
subset 
An optional vector specifying a subset of studies to be used. 
exclude 
An optional vector specifying studies to exclude from the meta-analysis; excluded studies are nevertheless shown in printouts and forest plots. 
method 
A character string indicating which method is to be used for pooling of studies. One of "Inverse" (inverse variance method, default) or "GLMM" (generalised linear mixed model). 
sm 
A character string indicating which summary measure ("PLOGIT", "PLN", "PFT", "PAS", or "PRAW") is to be used for pooling of studies, see Details. 
incr 
A numeric which is added to event number and sample size of studies with zero or all events, i.e., studies with an event probability of either 0 or 1. 
allincr 
A logical indicating if incr is added to all studies if at least one study has either zero or all events. If FALSE (default), incr is added only to studies with zero or all events. 
addincr 
A logical indicating if incr is added to all studies, irrespective of the number of events. 
method.ci 
A character string indicating which method is used to calculate confidence intervals for individual studies, see Details. 
level 
The level used to calculate confidence intervals for individual studies. 
level.comb 
The level used to calculate confidence intervals for pooled estimates. 
comb.fixed 
A logical indicating whether a fixed effect meta-analysis should be conducted. 
comb.random 
A logical indicating whether a random effects meta-analysis should be conducted. 
prediction 
A logical indicating whether a prediction interval should be printed. 
level.predict 
The level used to calculate the prediction interval for a new study. 
hakn 
A logical indicating whether the method by Hartung and Knapp should be used to adjust test statistics and confidence intervals. 
method.tau 
A character string indicating which method is used to estimate the between-study variance τ^2, see Details. 
tau.preset 
Prespecified value for the square root of the between-study variance τ^2. 
TE.tau 
Overall treatment effect used to estimate the between-study variance τ^2. 
tau.common 
A logical indicating whether τ^2 should be the same across subgroups. 
null.effect 
A numeric value specifying the effect under the null hypothesis. 
method.bias 
A character string indicating which test for funnel plot asymmetry is to be used, see function metabias. 
backtransf 
A logical indicating whether results for transformed proportions (argument sm) should be back transformed in printouts and plots. If TRUE (default), results are presented as proportions; otherwise, transformed proportions are shown. 
pscale 
A numeric defining a scaling factor for printing of single event probabilities. 
title 
Title of meta-analysis / systematic review. 
complab 
Comparison label. 
outclab 
Outcome label. 
byvar 
An optional vector containing grouping information (must be of same length as event). 
bylab 
A character string with a label for the grouping variable. 
print.byvar 
A logical indicating whether the name of the grouping variable should be printed in front of the group labels. 
byseparator 
A character string defining the separator between label and levels of grouping variable. 
keepdata 
A logical indicating whether original data (set) should be kept in meta object. 
warn 
A logical indicating whether the addition of incr to studies with zero or all events should result in a warning. 
... 
Additional arguments passed on to rma.glmm (only considered if method = "GLMM"). 
Fixed effect and random effects meta-analysis of single proportions to calculate an overall proportion. The following transformations of proportions are implemented to calculate an overall proportion:
Logit transformation (sm="PLOGIT", default)
Log transformation (sm="PLN")
Freeman-Tukey double arcsine transformation (sm="PFT")
Arcsine transformation (sm="PAS")
Raw, i.e. untransformed, proportions (sm="PRAW")
Note, you should use the R function metabin to compare proportions from pairwise comparisons instead of applying metaprop to each treatment arm separately, which would break randomisation in randomised controlled trials.
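As a sketch of this advice (the study data below are invented for illustration), a two-arm comparison belongs in metabin, not in two separate metaprop calls:

```r
library(meta)

# Hypothetical 2x2 data from three randomised trials (invented numbers)
event.e <- c(12, 8, 20)  # events in experimental arm
n.e     <- c(50, 40, 80)
event.c <- c(6, 5, 11)   # events in control arm
n.c     <- c(50, 40, 80)

# Appropriate: compare arms within each trial, preserving randomisation
m.ok <- metabin(event.e, n.e, event.c, n.c, sm = "RR")

# Discouraged: pooling each arm separately ignores the within-trial
# pairing and thereby breaks randomisation
m.e <- metaprop(event.e, n.e)
m.c <- metaprop(event.c, n.c)
```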
Various methods are available to calculate confidence intervals for individual study results (see Agresti & Coull 1998; Newcombe 1998):
Clopper-Pearson interval, also called 'exact' binomial interval (method.ci="CP", default)
Wilson score interval (method.ci="WS")
Wilson score interval with continuity correction (method.ci="WSCC")
Agresti-Coull interval (method.ci="AC")
Simple approximation interval (method.ci="SA")
Simple approximation interval with continuity correction (method.ci="SACC")
Normal approximation interval based on summary measure, i.e. defined by argument sm (method.ci="NAsm")
Note, with the exception of the normal approximation based on the summary measure, i.e. method.ci="NAsm", the same confidence interval is calculated for any summary measure (argument sm), as only the number of events and observations are used in the calculation, disregarding the chosen summary measure. Results will be presented for transformed proportions if argument backtransf=FALSE in the print.meta, print.summary.meta, or forest.meta function. In this case, argument method.ci="NAsm" is used, i.e. confidence intervals based on the normal approximation of the summary measure.
Argument pscale can be used to rescale proportions, e.g. pscale=1000 means that proportions are expressed as events per 1000 observations. This is useful in situations with (very) low event probabilities.
For several arguments, default settings are utilised (assignments using the gs function). These defaults can be changed using the settings.meta function.
Internally, both fixed effect and random effects models are calculated regardless of the values chosen for arguments comb.fixed and comb.random. Accordingly, the estimate for the random effects model can be extracted from component TE.random of an object of class "meta" even if argument comb.random=FALSE. However, all functions in R package meta will adequately consider the values for comb.fixed and comb.random. E.g. function print.meta will not print results for the random effects model if comb.random=FALSE.
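As a small sketch, the random effects estimate stays accessible even when its printout is suppressed:

```r
library(meta)

# Random effects results are not printed, but still computed
m <- metaprop(4:1, 10 * 1:4, comb.random = FALSE)

# The pooled logit proportion and its standard error can be
# extracted directly from the meta object
m$TE.random
m$seTE.random
```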
A distinctive and frequently overlooked advantage of binary data is that individual patient data (IPD) can be extracted. Accordingly, a random intercept logistic regression model can be utilised for the meta-analysis of proportions (Stijnen et al., 2010). This method is available (argument method = "GLMM") by internally calling the rma.glmm function from R package metafor.
If the summary measure is equal to "PRAW", "PLN", or "PLOGIT", a continuity correction is applied if any study has either zero or all events, i.e., an event probability of either 0 or 1. By default, 0.5 is used as continuity correction (argument incr). This continuity correction is used both to calculate individual study results with confidence limits and to conduct meta-analysis based on the inverse variance method. For GLMMs, no continuity correction is used.
Argument byvar can be used to conduct subgroup analysis for all methods but GLMMs. For GLMMs, use the metareg function instead, which can also handle continuous covariates.
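A minimal sketch of a subgroup analysis via byvar (the grouping variable region below is hypothetical):

```r
library(meta)

event <- c(10, 20, 30, 40)
n <- c(100, 120, 130, 140)
region <- c("A", "A", "B", "B")  # hypothetical grouping variable

# Subgroup analysis with the inverse variance method
m.sub <- metaprop(event, n, byvar = region, bylab = "Region")

# For GLMMs, subgroup analysis via byvar is not available; a
# meta-regression on the grouping variable can be used instead:
# metareg(metaprop(event, n, method = "GLMM"), ~ region)
```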
A prediction interval for the treatment effect of a new study is calculated (Higgins et al., 2009) if arguments prediction and comb.random are TRUE.
The R function update.meta can be used to redo the meta-analysis of an existing metaprop object by only specifying arguments which should be changed.
For the random effects model, the method by Hartung and Knapp (2003) is used to adjust test statistics and confidence intervals if argument hakn=TRUE.
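For illustration, a brief sketch of the Hartung-Knapp adjustment:

```r
library(meta)

# Hartung-Knapp adjustment: test statistics follow a t-distribution
# with k - 1 degrees of freedom, typically widening the confidence
# interval of the random effects estimate
m.hk <- metaprop(4:1, 10 * 1:4, comb.fixed = FALSE, hakn = TRUE)
m.hk$df.hakn  # k - 1 = 3 degrees of freedom for four studies
```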
The DerSimonian-Laird estimate (1986) is used in the random effects model if method.tau="DL". The iterative Paule-Mandel method (1982) to estimate the between-study variance is used if argument method.tau="PM". Internally, the R function paulemandel is called, which is based on the R function mpaule.default from R package metRology from S.L.R. Ellison <s.ellison at lgc.co.uk>.
If R package metafor (Viechtbauer 2010) is installed, the following methods to estimate the between-study variance τ^2 (argument method.tau) are also available:
Restricted maximum-likelihood estimator (method.tau="REML")
Maximum-likelihood estimator (method.tau="ML")
Hunter-Schmidt estimator (method.tau="HS")
Sidik-Jonkman estimator (method.tau="SJ")
Hedges estimator (method.tau="HE")
Empirical Bayes estimator (method.tau="EB")
For these methods, the R function rma.uni of R package metafor is called internally. See the help page of rma.uni for more details on these methods to estimate the between-study variance.
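A brief sketch comparing two between-study variance estimators (REML requires the metafor package):

```r
library(meta)

# DerSimonian-Laird (default) versus REML estimate of tau
m.dl <- metaprop(4:1, 10 * 1:4, method.tau = "DL")
if (requireNamespace("metafor", quietly = TRUE)) {
  m.reml <- update(m.dl, method.tau = "REML")
  # Compare the estimated between-study standard deviations
  print(c(DL = m.dl$tau, REML = m.reml$tau))
}
```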
An object of class c("metaprop", "meta") with corresponding print, summary, and forest functions. The object is a list containing the following components:
event, n, studlab, exclude, sm, incr, allincr, addincr, method.ci, level, level.comb, comb.fixed, comb.random, hakn, method.tau, tau.preset, TE.tau, null.effect, method.bias, tau.common, title, complab, outclab, byvar, bylab, print.byvar, byseparator, warn 
As defined above. 
TE, seTE 
Estimated (un)transformed proportion and its standard error for individual studies. 
lower, upper 
Lower and upper confidence interval limits for individual studies. 
zval, pval 
z-value and p-value for test of treatment effect for individual studies. 
w.fixed, w.random 
Weight of individual studies (in fixed and random effects model). 
TE.fixed, seTE.fixed 
Estimated overall (un)transformed proportion and standard error (fixed effect model). 
lower.fixed, upper.fixed 
Lower and upper confidence interval limits (fixed effect model). 
zval.fixed, pval.fixed 
z-value and p-value for test of overall effect (fixed effect model). 
TE.random, seTE.random 
Estimated overall (un)transformed proportion and standard error (random effects model). 
lower.random, upper.random 
Lower and upper confidence interval limits (random effects model). 
zval.random, pval.random 
z-value or t-value and corresponding p-value for test of overall effect (random effects model). 
prediction, level.predict 
As defined above. 
seTE.predict 
Standard error utilised for prediction interval. 
lower.predict, upper.predict 
Lower and upper limits of prediction interval. 
k 
Number of studies combined in meta-analysis. 
Q 
Heterogeneity statistic Q. 
tau 
Square root of between-study variance. 
se.tau 
Standard error of square root of between-study variance. 
C 
Scaling factor utilised internally to calculate common τ^2 across subgroups. 
method 
A character string indicating method used for pooling: "Inverse" or "GLMM". 
df.hakn 
Degrees of freedom for test of treatment effect for Hartung-Knapp method (only if hakn=TRUE). 
bylevs 
Levels of grouping variable (only if byvar is not missing). 
TE.fixed.w, seTE.fixed.w 
Estimated treatment effect and standard error in subgroups (fixed effect model), only if byvar is not missing. 
lower.fixed.w, upper.fixed.w 
Lower and upper confidence interval limits in subgroups (fixed effect model), only if byvar is not missing. 
zval.fixed.w, pval.fixed.w 
z-value and p-value for test of treatment effect in subgroups (fixed effect model), only if byvar is not missing. 
TE.random.w, seTE.random.w 
Estimated treatment effect and standard error in subgroups (random effects model), only if byvar is not missing. 
lower.random.w, upper.random.w 
Lower and upper confidence interval limits in subgroups (random effects model), only if byvar is not missing. 
zval.random.w, pval.random.w 
z-value or t-value and corresponding p-value for test of treatment effect in subgroups (random effects model), only if byvar is not missing. 
w.fixed.w, w.random.w 
Weight of subgroups (in fixed and random effects model), only if byvar is not missing. 
df.hakn.w 
Degrees of freedom for test of treatment effect for Hartung-Knapp method in subgroups, only if byvar is not missing and hakn=TRUE. 
n.harmonic.mean.w 
Harmonic mean of number of observations in subgroups (for back transformation of Freeman-Tukey double arcsine transformation), only if byvar is not missing. 
event.w 
Number of events in subgroups (only if byvar is not missing). 
n.w 
Number of observations in subgroups (only if byvar is not missing). 
k.w 
Number of studies combined within subgroups (only if byvar is not missing). 
k.all.w 
Number of all studies in subgroups (only if byvar is not missing). 
Q.w 
Heterogeneity statistics within subgroups (only if byvar is not missing). 
Q.w.fixed 
Overall within subgroups heterogeneity statistic Q (based on fixed effect model), only if byvar is not missing. 
Q.w.random 
Overall within subgroups heterogeneity statistic Q (based on random effects model), only if byvar is not missing. 
df.Q.w 
Degrees of freedom for test of overall within subgroups heterogeneity (only if byvar is not missing). 
Q.b.fixed 
Overall between subgroups heterogeneity statistic Q (based on fixed effect model), only if byvar is not missing. 
Q.b.random 
Overall between subgroups heterogeneity statistic Q (based on random effects model), only if byvar is not missing. 
df.Q.b 
Degrees of freedom for test of overall between subgroups heterogeneity (only if byvar is not missing). 
tau.w 
Square root of between-study variance within subgroups (only if byvar is not missing). 
C.w 
Scaling factor utilised internally to calculate common τ^2 across subgroups (only if byvar is not missing). 
H.w 
Heterogeneity statistic H within subgroups (only if byvar is not missing). 
lower.H.w, upper.H.w 
Lower and upper confidence limit for heterogeneity statistic H within subgroups (only if byvar is not missing). 
I2.w 
Heterogeneity statistic I2 within subgroups (only if byvar is not missing). 
lower.I2.w, upper.I2.w 
Lower and upper confidence limit for heterogeneity statistic I2 within subgroups (only if byvar is not missing). 
incr.event 
Increment added to number of events. 
keepdata 
As defined above. 
data 
Original data (set) used in function call (if keepdata=TRUE). 
subset 
Information on subset of original data used in meta-analysis (if keepdata=TRUE). 
.glmm.fixed 
GLMM object generated by call of rma.glmm (fixed effect model). 
.glmm.random 
GLMM object generated by call of rma.glmm (random effects model). 
call 
Function call. 
version 
Version of R package meta used to create object. 
version.metafor 
Version of R package metafor used for GLMMs. 
Guido Schwarzer [email protected]
Agresti A & Coull BA (1998), Approximate is better than “exact” for interval estimation of binomial proportions. The American Statistician, 52, 119–126.
DerSimonian R & Laird N (1986), Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 177–188.
Edward JM et al. (2006), Adherence to antiretroviral therapy in sub-Saharan Africa and North America: a meta-analysis. Journal of the American Medical Association, 296, 679–690.
Freeman MF & Tukey JW (1950), Transformations related to the angular and the square root. Annals of Mathematical Statistics, 21, 607–611.
Higgins JPT, Thompson SG, Spiegelhalter DJ (2009), A re-evaluation of random-effects meta-analysis. Journal of the Royal Statistical Society: Series A, 172, 137–159.
Knapp G & Hartung J (2003), Improved tests for a random effects meta-regression with a single covariate. Statistics in Medicine, 22, 2693–2710, doi:10.1002/sim.1482.
Miller JJ (1978), The inverse of the Freeman-Tukey double arcsine transformation. The American Statistician, 32, 138.
Newcombe RG (1998), Two-sided confidence intervals for the single proportion: Comparison of seven methods. Statistics in Medicine, 17, 857–872.
Paule RC & Mandel J (1982), Consensus values and weighting factors. Journal of Research of the National Bureau of Standards, 87, 377–385.
Pettigrew HM, Gart JJ, Thomas DG (1986), The bias and higher cumulants of the logarithm of a binomial variate. Biometrika, 73, 425–435.
Stijnen T, Hamza TH, Ozdemir P (2010), Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data. Statistics in Medicine, 29, 3046–67.
Viechtbauer W (2010), Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36, 1–48.
update.meta, metacont, metagen, print.meta
#
# Apply various metaanalysis methods to estimate proportions
#
m1 <- metaprop(4:1, 10 * 1:4)
m2 <- update(m1, sm="PAS")
m3 <- update(m1, sm="PRAW")
m4 <- update(m1, sm="PLN")
m5 <- update(m1, sm="PFT")
#
m1
m2
m3
m4
m5
#
forest(m1)
# forest(m2)
# forest(m3)
# forest(m3, pscale=100)
# forest(m4)
# forest(m5)
#
# Do not back transform results, e.g. print logit transformed
# proportions if sm="PLOGIT" and store old settings
#
oldset <- settings.meta(backtransf=FALSE)
#
m6 <- metaprop(4:1, c(10, 20, 30, 40))
m7 <- update(m6, sm="PAS")
m8 <- update(m6, sm="PRAW")
m9 <- update(m6, sm="PLN")
m10 <- update(m6, sm="PFT")
#
forest(m6)
# forest(m7)
# forest(m8)
# forest(m8, pscale=100)
# forest(m9)
# forest(m10)
#
# Use old settings
#
settings.meta(oldset)
#
# Examples with zero events
#
m1 <- metaprop(c(0, 0, 10, 10), rep(100, 4))
m2 <- metaprop(c(0, 0, 10, 10), rep(100, 4), incr=0.1)
#
summary(m1)
summary(m2)
#
# forest(m1)
# forest(m2)
#
# Example from Miller (1978):
#
death <- c(3, 6, 10, 1)
animals <- c(11, 17, 21, 6)
#
m3 <- metaprop(death, animals, sm="PFT")
forest(m3)
#
# Data examples from Newcombe (1998)
#  apply various methods to estimate confidence intervals for
# individual studies
#
event <- c(81, 15, 0, 1)
n <- c(263, 148, 20, 29)
#
m1 <- metaprop(event, n, sm="PLOGIT", method.ci="SA")
m2 <- update(m1, method.ci="SACC")
m3 <- update(m1, method.ci="WS")
m4 <- update(m1, method.ci="WSCC")
m5 <- update(m1, method.ci="CP")
#
lower <- round(rbind(NA, m1$lower, m2$lower, NA, m3$lower, m4$lower, NA, m5$lower), 4)
upper <- round(rbind(NA, m1$upper, m2$upper, NA, m3$upper, m4$upper, NA, m5$upper), 4)
#
tab1 <- data.frame(
  scen1=meta:::formatCI(lower[,1], upper[,1]),
  scen2=meta:::formatCI(lower[,2], upper[,2]),
  scen3=meta:::formatCI(lower[,3], upper[,3]),
  scen4=meta:::formatCI(lower[,4], upper[,4]),
  stringsAsFactors=FALSE
)
names(tab1) <- c("r=81, n=263", "r=15, n=148", "r=0, n=20", "r=1, n=29")
row.names(tab1) <- c("Simple", " SA", " SACC",
                     "Score", " WS", " WSCC",
                     "Binomial", " CP")
tab1[is.na(tab1)] <- ""
#
# Newcombe (1998), Table I, methods 1-5:
#
tab1
#
# Same confidence interval, i.e. unaffected by choice of summary measure
#
print(metaprop(event, n, sm="PLOGIT", method.ci="WS"), ma=FALSE)
print(metaprop(event, n, sm="PLN", method.ci="WS"), ma=FALSE)
print(metaprop(event, n, sm="PFT", method.ci="WS"), ma=FALSE)
print(metaprop(event, n, sm="PAS", method.ci="WS"), ma=FALSE)
print(metaprop(event, n, sm="PRAW", method.ci="WS"), ma=FALSE)
#
# Different confidence intervals as argument method.ci="NAsm"
#
print(metaprop(event, n, sm="PLOGIT", method.ci="NAsm"), ma=FALSE)
print(metaprop(event, n, sm="PLN", method.ci="NAsm"), ma=FALSE)
print(metaprop(event, n, sm="PFT", method.ci="NAsm"), ma=FALSE)
print(metaprop(event, n, sm="PAS", method.ci="NAsm"), ma=FALSE)
print(metaprop(event, n, sm="PRAW", method.ci="NAsm"), ma=FALSE)
#
# Different confidence intervals as argument backtransf=FALSE.
# Accordingly, method.ci="NAsm" used internally.
#
print(metaprop(event, n, sm="PLOGIT", method.ci="WS"), ma=FALSE, backtransf=FALSE)
print(metaprop(event, n, sm="PLN", method.ci="WS"), ma=FALSE, backtransf=FALSE)
print(metaprop(event, n, sm="PFT", method.ci="WS"), ma=FALSE, backtransf=FALSE)
print(metaprop(event, n, sm="PAS", method.ci="WS"), ma=FALSE, backtransf=FALSE)
print(metaprop(event, n, sm="PRAW", method.ci="WS"), ma=FALSE, backtransf=FALSE)
#
# Same results (printed on original and log scale, respectively)
#
print(metaprop(event, n, sm="PLN", method.ci="NAsm"), ma=FALSE)
print(metaprop(event, n, sm="PLN"), ma=FALSE, backtransf=FALSE)
# Results for first study (on log scale)
round(log(c(0.3079848, 0.2569522, 0.3691529)), 4)
#
# Metaanalysis using generalised linear mixed models
# (only if R packages 'metafor' and 'lme4' are available)
#
if (suppressMessages(require(metafor, quietly = TRUE, warn.conflicts = FALSE)) &&
    require(lme4, quietly = TRUE))
  metaprop(event, n, method = "GLMM")
#
# Print results as events per 1000 observations
#
print(metaprop(6:8, c(100, 1200, 1000)), pscale = 1000, digits = 1)
