metaprop  R Documentation 
Calculation of an overall proportion from studies reporting a
single proportion. The inverse variance method and generalised
linear mixed models (GLMMs) are available for pooling. For GLMMs,
the rma.glmm function from R package metafor (Viechtbauer 2010)
is called internally.
metaprop(
  event, n, studlab, data = NULL, subset = NULL, exclude = NULL,
  cluster = NULL, method, sm = gs("smprop"), incr = gs("incr"),
  method.incr = gs("method.incr"), method.ci = gs("method.ci.prop"),
  level = gs("level"), common = gs("common"),
  random = gs("random") | !is.null(tau.preset),
  overall = common | random, overall.hetstat = common | random,
  prediction = gs("prediction"),
  method.tau = ifelse(!is.na(charmatch(tolower(method), "glmm",
    nomatch = NA)), "ML", gs("method.tau")),
  method.tau.ci = gs("method.tau.ci"), tau.preset = NULL,
  TE.tau = NULL, tau.common = gs("tau.common"),
  level.ma = gs("level.ma"), method.random.ci = gs("method.random.ci"),
  adhoc.hakn.ci = gs("adhoc.hakn.ci"),
  level.predict = gs("level.predict"),
  method.predict = gs("method.predict"),
  adhoc.hakn.pi = gs("adhoc.hakn.pi"), null.effect = NA,
  method.bias = gs("method.bias"), backtransf = gs("backtransf"),
  pscale = 1, text.common = gs("text.common"),
  text.random = gs("text.random"), text.predict = gs("text.predict"),
  text.w.common = gs("text.w.common"),
  text.w.random = gs("text.w.random"), title = gs("title"),
  complab = gs("complab"), outclab = "",
  subgroup, subgroup.name = NULL,
  print.subgroup.name = gs("print.subgroup.name"),
  sep.subgroup = gs("sep.subgroup"),
  test.subgroup = gs("test.subgroup"),
  prediction.subgroup = gs("prediction.subgroup"),
  byvar, hakn, adhoc.hakn, keepdata = gs("keepdata"),
  warn = gs("warn"), warn.deprecated = gs("warn.deprecated"),
  control = NULL, ...
)
event 
Number of events. 
n 
Number of observations. 
studlab 
An optional vector with study labels. 
data 
An optional data frame containing the study information, i.e., event and n. 
subset 
An optional vector specifying a subset of studies to be used. 
exclude 
An optional vector specifying studies to exclude from the meta-analysis, however, to be included in printouts and forest plots. 
cluster 
An optional vector specifying which estimates come from the same cluster, resulting in the use of a three-level meta-analysis model. 
method 
A character string indicating which method is to be used for
pooling of studies. One of "Inverse" or "GLMM", see Details. 
sm 
A character string indicating which summary measure ("PLOGIT",
"PAS", "PFT", "PLN", or "PRAW") is to be used for pooling of
studies, see Details. 
incr 
A numeric which is added to event number and sample size of studies with zero or all events, i.e., studies with an event probability of either 0 or 1. 
method.incr 
A character string indicating which continuity correction method
should be used ("only0", "if0all", or "all"), see Details. 
method.ci 
A character string indicating which method is used to calculate confidence intervals for individual studies, see Details. 
level 
The level used to calculate confidence intervals for individual studies. 
common 
A logical indicating whether a common effect meta-analysis should be conducted. 
random 
A logical indicating whether a random effects meta-analysis should be conducted. 
overall 
A logical indicating whether overall summaries should be reported. This argument is useful in a meta-analysis with subgroups if overall results should not be reported. 
overall.hetstat 
A logical value indicating whether to print heterogeneity measures for overall treatment comparisons. This argument is useful in a meta-analysis with subgroups if heterogeneity statistics should only be printed on subgroup level. 
prediction 
A logical indicating whether a prediction interval should be printed. 
method.tau 
A character string indicating which method is
used to estimate the betweenstudy variance τ^2 and its
square root τ (see 
method.tau.ci 
A character string indicating which method is
used to estimate the confidence interval of τ^2 and
τ (see 
tau.preset 
Prespecified value for the square root τ of the between-study variance τ^2. 
TE.tau 
Overall treatment effect used to estimate the between-study variance τ^2. 
tau.common 
A logical indicating whether τ^2 should be the same across subgroups. 
level.ma 
The level used to calculate confidence intervals for metaanalysis estimates. 
method.random.ci 
A character string indicating which method is used to calculate
confidence interval and test statistic for the random effects
estimate. 
adhoc.hakn.ci 
A character string indicating whether an ad hoc variance
correction should be applied in the case of an arbitrarily small
Hartung-Knapp variance estimate. 
level.predict 
The level used to calculate prediction interval for a new study. 
method.predict 
A character string indicating which method is used to calculate a
prediction interval. 
adhoc.hakn.pi 
A character string indicating whether an ad hoc variance
correction should be applied for the prediction interval. 
null.effect 
A numeric value specifying the effect under the null hypothesis. 
method.bias 
A character string indicating which test for funnel plot asymmetry
is to be used. 
backtransf 
A logical indicating whether results for transformed proportions
(argument sm) should be back transformed. 
pscale 
A numeric defining a scaling factor for printing of single event probabilities. 
text.common 
A character string used in printouts and forest plot to label the pooled common effect estimate. 
text.random 
A character string used in printouts and forest plot to label the pooled random effects estimate. 
text.predict 
A character string used in printouts and forest plot to label the prediction interval. 
text.w.common 
A character string used to label weights of common effect model. 
text.w.random 
A character string used to label weights of random effects model. 
title 
Title of meta-analysis / systematic review. 
complab 
Comparison label. 
outclab 
Outcome label. 
subgroup 
An optional vector to conduct a meta-analysis with subgroups. 
subgroup.name 
A character string with a name for the subgroup variable. 
print.subgroup.name 
A logical indicating whether the name of the subgroup variable should be printed in front of the group labels. 
sep.subgroup 
A character string defining the separator between name of subgroup variable and subgroup label. 
test.subgroup 
A logical value indicating whether to print results of test for subgroup differences. 
prediction.subgroup 
A logical indicating whether prediction intervals should be printed for subgroups. 
byvar 
Deprecated argument (replaced by 'subgroup'). 
hakn 
Deprecated argument (replaced by 'method.random.ci'). 
adhoc.hakn 
Deprecated argument (replaced by 'adhoc.hakn.ci'). 
keepdata 
A logical indicating whether original data (set) should be kept in meta object. 
warn 
A logical indicating whether a warning should be printed if the
continuity correction incr is added to studies with zero or all
events. 
warn.deprecated 
A logical indicating whether warnings should be printed if deprecated arguments are used. 
control 
An optional list to control the iterative process to estimate the
between-study variance τ^2. This argument is passed on to rma.uni
or rma.glmm from R package metafor. 
... 
Additional arguments passed on to the internally called functions,
e.g., rma.glmm for GLMMs. 
This function provides methods for common effect and random
effects meta-analysis of single proportions to calculate an
overall proportion. Note, to compare proportions from pairwise
comparisons you should use R function metabin instead of applying
metaprop to each treatment arm separately, which would break
randomisation in randomised controlled trials.
The following transformations of proportions are implemented to
calculate an overall proportion:
Logit transformation (sm = "PLOGIT", default)
Arcsine transformation (sm = "PAS")
Freeman-Tukey double arcsine transformation (sm = "PFT")
Log transformation (sm = "PLN")
Raw, i.e. untransformed, proportions (sm = "PRAW")
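Each of these summary measures is a simple function of the observed proportion event / n. A minimal sketch of the transformations (in Python, purely for illustration; the package computes them in R, and the Freeman-Tukey form follows the common statement of Freeman & Tukey, 1950):

```python
import math

def transform(event, n, sm="PLOGIT"):
    """Apply the chosen summary measure to an observed proportion event/n."""
    p = event / n
    if sm == "PLOGIT":    # logit transformation (default)
        return math.log(p / (1 - p))
    if sm == "PAS":       # arcsine transformation
        return math.asin(math.sqrt(p))
    if sm == "PFT":       # Freeman-Tukey double arcsine transformation
        return 0.5 * (math.asin(math.sqrt(event / (n + 1))) +
                      math.asin(math.sqrt((event + 1) / (n + 1))))
    if sm == "PLN":       # log transformation
        return math.log(p)
    if sm == "PRAW":      # raw, untransformed proportion
        return p
    raise ValueError(f"unknown summary measure: {sm}")
```

Note that the logit and log transformations are undefined for studies with zero or all events, which is why a continuity correction (argument incr) is needed for such studies.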
A generalised linear mixed model (GLMM), more specifically a
random intercept logistic regression model, can be utilised for
the meta-analysis of proportions (Stijnen et al., 2010). This is
the default method for the logit transformation (argument sm =
"PLOGIT"). Internally, the rma.glmm function from R package
metafor is called to fit a GLMM.
Classic meta-analysis (Borenstein et al., 2010) utilising the
(un)transformed proportions and corresponding standard errors in
the inverse variance method is conducted by calling the metagen
function internally. This is the only available method for all
transformations but the logit transformation. The classic
meta-analysis model with logit transformed proportions is used by
setting argument method = "Inverse".
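In the inverse variance method, each study contributes its transformed proportion weighted by the reciprocal of its variance; the pooled value is then back transformed. A numeric sketch of the common effect computation with logit transformed proportions (Python, for illustration only; the package performs this via metagen in R, and the large-sample variance 1/event + 1/(n - event) of the logit is the standard choice):

```python
import math

def logit_iv_pool(events, ns):
    """Common effect inverse variance pooling of logit-transformed proportions."""
    ys, ws = [], []
    for e, n in zip(events, ns):
        y = math.log(e / (n - e))       # logit of the observed proportion
        var = 1 / e + 1 / (n - e)       # large-sample variance of the logit
        ys.append(y)
        ws.append(1 / var)              # inverse variance weight
    pooled = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    p = 1 / (1 + math.exp(-pooled))     # back transform to a proportion
    return pooled, p

# Same data as the first example below: events 4:1, sample sizes 10 * 1:4
pooled_logit, pooled_p = logit_iv_pool([4, 3, 2, 1], [10, 20, 30, 40])
```

The pooled proportion necessarily lies between the smallest and largest observed proportions.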
A three-level random effects meta-analysis model (Van den
Noortgate et al., 2013) is utilised if argument cluster is used
and at least one cluster provides more than one estimate.
Internally, rma.mv is called to conduct the analysis and
weights.rma.mv with argument type = "rowsum" is used to calculate
random effects weights.
Default settings are utilised for several arguments (assignments
using the gs function). These defaults can be changed for the
current R session using the settings.meta function. Furthermore,
R function update.meta can be used to rerun a meta-analysis with
different settings.
Contradictory recommendations on the use of transformations of proportions have been published in the literature. For example, Barendregt et al. (2013) recommend the Freeman-Tukey double arcsine transformation instead of the logit transformation, whereas Warton & Hui (2011) strongly advise using generalised linear mixed models with the logit transformation instead of the arcsine transformation.
Schwarzer et al. (2019) describe seriously misleading results in a meta-analysis with very different sample sizes due to problems with the back-transformation of the Freeman-Tukey transformation, which requires a single sample size (Miller, 1978). Accordingly, Schwarzer et al. (2019) also recommend using GLMMs for the meta-analysis of single proportions, however, admit that individual study weights are not available with this method. Meta-analysts who require individual study weights should consider the inverse variance method with the arcsine or logit transformation.
In order to prevent misleading conclusions for the Freeman-Tukey double arcsine transformation, sensitivity analyses using other transformations or a range of sample sizes should be conducted (Schwarzer et al., 2019).
Three approaches are available to apply a continuity correction:
Only studies with a zero cell count (method.incr = "only0")
All studies if at least one study has a zero cell count (method.incr = "if0all")
All studies irrespective of zero cell counts (method.incr = "all")
If the summary measure is equal to "PLOGIT", "PLN", or "PRAW", the continuity correction is applied if a study has either zero or all events, i.e., an event probability of either 0 or 1.
By default, 0.5 is used as continuity correction (argument incr).
This continuity correction is used both to calculate individual
study results with confidence limits and to conduct meta-analysis
based on the inverse variance method. For GLMMs no continuity
correction is used.
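The effect of the continuity correction on the logit can be sketched as follows (Python, illustration only; incr is added to the event and non-event counts of affected studies, a common convention, though package internals may differ in detail):

```python
import math

def corrected_logit(event, n, incr=0.5):
    """Logit of event/n with a continuity correction for studies with
    zero or all events (an observed probability of 0 or 1): `incr` is
    added to the event and non-event counts of such studies."""
    nonevent = n - event
    if event == 0 or nonevent == 0:
        event, nonevent = event + incr, nonevent + incr
    return math.log(event / nonevent)
```

Without the correction, a study with 0 events out of 100 would have an undefined logit; with the default incr = 0.5 it contributes log(0.5 / 100.5).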
Various methods are available to calculate confidence intervals for individual study results (see Agresti & Coull 1998 and Newcombe 1998):
Clopper-Pearson interval, also called 'exact' binomial interval (method.ci = "CP", default)
Wilson score interval (method.ci = "WS")
Wilson score interval with continuity correction (method.ci = "WSCC")
Agresti-Coull interval (method.ci = "AC")
Simple approximation interval (method.ci = "SA")
Simple approximation interval with continuity correction (method.ci = "SACC")
Normal approximation interval based on summary measure, i.e. defined by argument sm (method.ci = "NAsm")
Note, with the exception of the normal approximation based on the
summary measure, i.e. method.ci = "NAsm", the same confidence
interval is calculated for individual studies for any summary
measure (argument sm) as only the number of events and
observations are used in the calculation, disregarding the chosen
transformation.
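As an example of one of these methods, the Wilson score interval (method.ci = "WS") depends only on the event count and sample size. A sketch for the default 95% level (Python, for illustration; the z quantile 1.959964 is assumed here rather than taken from the package):

```python
import math

def wilson_ci(event, n):
    """Wilson score 95% confidence interval for a single proportion
    (method.ci = "WS"), without continuity correction."""
    z = 1.959964                          # 97.5% standard normal quantile
    p = event / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half
```

Unlike the simple approximation interval, the Wilson interval never extends below 0 or above 1; for the Newcombe (1998) scenario r = 0, n = 20 it gives roughly (0, 0.161).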
Results will be presented for transformed proportions if argument
backtransf = FALSE. In this case, method.ci = "NAsm" is used,
i.e. confidence intervals based on the normal approximation for
the summary measure.
Argument subgroup can be used to conduct subgroup analysis for a
categorical covariate. The metareg function can be used instead
for more than one categorical covariate or continuous covariates.
Argument null.effect can be used to specify the proportion used
under the null hypothesis in a test for an overall effect. By
default (null.effect = NA), no hypothesis test is conducted as it
is unclear which value is a sensible choice for the data at
hand. An overall proportion of 50%, for example, could be tested
by setting argument null.effect = 0.5. Note, all tests for an
overall effect are two-sided with the alternative hypothesis that
the effect is unequal to null.effect.
Arguments subset and exclude can be used to exclude studies from
the meta-analysis. Studies are removed completely from the
meta-analysis using argument subset, while excluded studies are
shown in printouts and forest plots using argument exclude (see
Examples in metagen). Meta-analysis results are the same for both
arguments.
Internally, both common effect and random effects models are
calculated regardless of the values chosen for arguments common
and random. Accordingly, the estimate for the random effects
model can be extracted from component TE.random of an object of
class "meta" even if argument random = FALSE. However, all
functions in R package meta will adequately consider the values
for common and random. E.g., function print.meta will not print
results for the random effects model if random = FALSE.
Argument pscale can be used to rescale proportions, e.g. pscale =
1000 means that proportions are expressed as events per 1000
observations. This is useful in situations with (very) low event
probabilities.
A prediction interval will only be shown if prediction = TRUE.
An object of class c("metaprop", "meta") with corresponding
generic functions (see meta-object).
Guido Schwarzer sc@imbi.uni-freiburg.de
Agresti A & Coull BA (1998): Approximate is better than "exact" for interval estimation of binomial proportions. The American Statistician, 52, 119–26
Barendregt JJ, Doi SA, Lee YY, Norman RE, Vos T (2013): Meta-analysis of prevalence. Journal of Epidemiology and Community Health, 67, 974–8
Borenstein M, Hedges LV, Higgins JP, Rothstein HR (2010): A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1, 97–111
Freeman MF & Tukey JW (1950): Transformations related to the angular and the square root. Annals of Mathematical Statistics, 21, 607–11
Miller JJ (1978): The inverse of the Freeman-Tukey double arcsine transformation. The American Statistician, 32, 138
Newcombe RG (1998): Two-sided confidence intervals for the single proportion: comparison of seven methods. Statistics in Medicine, 17, 857–72
Pettigrew HM, Gart JJ, Thomas DG (1986): The bias and higher cumulants of the logarithm of a binomial variate. Biometrika, 73, 425–35
Schwarzer G, Chemaitelly H, Abu-Raddad LJ, Rücker G (2019): Seriously misleading results using inverse of Freeman-Tukey double arcsine transformation in meta-analysis of single proportions. Research Synthesis Methods, 10, 476–83
Stijnen T, Hamza TH, Ozdemir P (2010): Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data. Statistics in Medicine, 29, 3046–67
Van den Noortgate W, López-López JA, Marín-Martínez F, Sánchez-Meca J (2013): Three-level meta-analysis of dependent effect sizes. Behavior Research Methods, 45, 576–94
Viechtbauer W (2010): Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36, 1–48
Warton DI, Hui FKC (2011): The arcsine is asinine: the analysis of proportions in ecology. Ecology, 92, 3–10
meta-package, update.meta, metacont, metagen, print.meta, forest.meta
# Meta-analysis using generalised linear mixed model
#
metaprop(4:1, 10 * 1:4)

# Apply various classic meta-analysis methods to estimate
# proportions
#
m1 <- metaprop(4:1, 10 * 1:4, method = "Inverse")
m2 <- update(m1, sm = "PAS")
m3 <- update(m1, sm = "PRAW")
m4 <- update(m1, sm = "PLN")
m5 <- update(m1, sm = "PFT")
#
m1
m2
m3
m4
m5
#
forest(m1)
## Not run: 
forest(m2)
forest(m3)
forest(m3, pscale = 100)
forest(m4)
forest(m5)
## End(Not run)

# Do not back transform results, e.g. print logit transformed
# proportions if sm = "PLOGIT" and store old settings
#
oldset <- settings.meta(backtransf = FALSE)
#
m6 <- metaprop(4:1, c(10, 20, 30, 40), method = "Inverse")
m7 <- update(m6, sm = "PAS")
m8 <- update(m6, sm = "PRAW")
m9 <- update(m6, sm = "PLN")
m10 <- update(m6, sm = "PFT")
#
forest(m6)
## Not run: 
forest(m7)
forest(m8)
forest(m8, pscale = 100)
forest(m9)
forest(m10)
## End(Not run)

# Use old settings
#
settings.meta(oldset)

# Examples with zero events
#
m1 <- metaprop(c(0, 0, 10, 10), rep(100, 4), method = "Inverse")
m2 <- metaprop(c(0, 0, 10, 10), rep(100, 4), incr = 0.1, method = "Inverse")
#
m1
m2
#
## Not run: 
forest(m1)
forest(m2)
## End(Not run)

# Example from Miller (1978):
#
death <- c(3, 6, 10, 1)
animals <- c(11, 17, 21, 6)
#
m3 <- metaprop(death, animals, sm = "PFT")
forest(m3)

# Data examples from Newcombe (1998)
# - apply various methods to estimate confidence intervals for
#   individual studies
#
event <- c(81, 15, 0, 1)
n <- c(263, 148, 20, 29)
#
m1 <- metaprop(event, n, method.ci = "SA", method = "Inverse")
m2 <- update(m1, method.ci = "SACC")
m3 <- update(m1, method.ci = "WS")
m4 <- update(m1, method.ci = "WSCC")
m5 <- update(m1, method.ci = "CP")
#
lower <- round(rbind(NA, m1$lower, m2$lower, NA, m3$lower,
                     m4$lower, NA, m5$lower), 4)
upper <- round(rbind(NA, m1$upper, m2$upper, NA, m3$upper,
                     m4$upper, NA, m5$upper), 4)
#
tab1 <- data.frame(
  scen1 = meta:::formatCI(lower[, 1], upper[, 1]),
  scen2 = meta:::formatCI(lower[, 2], upper[, 2]),
  scen3 = meta:::formatCI(lower[, 3], upper[, 3]),
  scen4 = meta:::formatCI(lower[, 4], upper[, 4])
)
names(tab1) <- c("r=81, n=263", "r=15, n=148", "r=0, n=20", "r=1, n=29")
row.names(tab1) <- c("Simple", " SA", " SACC",
                     "Score", " WS", " WSCC", "Binomial", " CP")
tab1[is.na(tab1)] <- ""
# Newcombe (1998), Table I, methods 1-5:
tab1

# Same confidence interval, i.e. unaffected by choice of summary
# measure
#
print(metaprop(event, n, method.ci = "WS", method = "Inverse"), ma = FALSE)
print(metaprop(event, n, sm = "PLN", method.ci = "WS"), ma = FALSE)
print(metaprop(event, n, sm = "PFT", method.ci = "WS"), ma = FALSE)
print(metaprop(event, n, sm = "PAS", method.ci = "WS"), ma = FALSE)
print(metaprop(event, n, sm = "PRAW", method.ci = "WS"), ma = FALSE)

# Different confidence intervals as argument method.ci = "NAsm"
#
print(metaprop(event, n, method.ci = "NAsm", method = "Inverse"), ma = FALSE)
print(metaprop(event, n, sm = "PLN", method.ci = "NAsm"), ma = FALSE)
print(metaprop(event, n, sm = "PFT", method.ci = "NAsm"), ma = FALSE)
print(metaprop(event, n, sm = "PAS", method.ci = "NAsm"), ma = FALSE)
print(metaprop(event, n, sm = "PRAW", method.ci = "NAsm"), ma = FALSE)

# Different confidence intervals as argument backtransf = FALSE.
# Accordingly, method.ci = "NAsm" used internally.
#
print(metaprop(event, n, method.ci = "WS", method = "Inverse"),
      ma = FALSE, backtransf = FALSE)
print(metaprop(event, n, sm = "PLN", method.ci = "WS"),
      ma = FALSE, backtransf = FALSE)
print(metaprop(event, n, sm = "PFT", method.ci = "WS"),
      ma = FALSE, backtransf = FALSE)
print(metaprop(event, n, sm = "PAS", method.ci = "WS"),
      ma = FALSE, backtransf = FALSE)
print(metaprop(event, n, sm = "PRAW", method.ci = "WS"),
      ma = FALSE, backtransf = FALSE)

# Same results (printed on original and log scale, respectively)
#
print(metaprop(event, n, sm = "PLN", method.ci = "NAsm"), ma = FALSE)
print(metaprop(event, n, sm = "PLN"), ma = FALSE, backtransf = FALSE)
# Results for first study (on log scale)
round(log(c(0.3079848, 0.2569522, 0.3691529)), 4)

# Print results as events per 1000 observations
#
print(metaprop(6:8, c(100, 1200, 1000), method = "Inverse"),
      pscale = 1000, digits = 1)