permutest (R Documentation)

Function to carry out permutation tests for objects of class "rma.uni" and "rma.ls".
permutest(x, ...)
## S3 method for class 'rma.uni'
permutest(x, exact=FALSE, iter=1000, permci=FALSE,
progbar=TRUE, digits, control, ...)
## S3 method for class 'rma.ls'
permutest(x, exact=FALSE, iter=1000,
progbar=TRUE, digits, control, ...)
x: an object of class "rma.uni" or "rma.ls".

exact: logical to specify whether an exact permutation test should be carried out (the default is FALSE).

iter: integer to specify the number of iterations for the permutation test when not doing an exact test (the default is 1000).

permci: logical to specify whether permutation-based confidence intervals (CIs) should also be constructed (the default is FALSE).

progbar: logical to specify whether a progress bar should be shown (the default is TRUE).

digits: optional integer to specify the number of decimal places to which the printed results should be rounded. If unspecified, the default is to take the value from the object.

control: list of control values for the numerical comparisons (comptol) and for the construction of permutation-based CIs (tol, maxiter, distfac, extendInt); see the notes below.

...: other arguments.
For models without moderators, the permutation test is carried out by permuting the signs of the observed effect sizes or outcomes. The (two-sided) p-value of the permutation test is then equal to the proportion of times that the absolute value of the test statistic under the permuted data is as extreme or more extreme than under the actually observed data. See Follmann and Proschan (1999) for more details.
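To illustrate the idea, here is a rough by-hand sketch of such a sign-flip test for a random-effects model fitted to the dat.bcg data (also used in the 'Examples' below). This is only a conceptual illustration; permutest() carries out these steps internally (and also handles the exact test and the numerical tolerance described in the notes below).

### conceptual sketch only; not how permutest() is meant to be used in practice
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)
k   <- nrow(dat)
set.seed(1234)
### refit the model under 1000 random sign flips and save the test statistic
zval.perm <- replicate(1000, {
   signs <- sample(c(-1, 1), k, replace=TRUE)
   rma(signs * dat$yi, dat$vi)$zval
})
### approximate two-sided p-value (counting the observed data as one permutation)
mean(abs(c(res$zval, zval.perm)) >= abs(res$zval))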
For models with moderators, the permutation test is carried out by permuting the rows of the model matrix (i.e., X). The (two-sided) p-value for a particular model coefficient is then equal to the proportion of times that the absolute value of the test statistic for the coefficient under the permuted data is as extreme or more extreme than under the actually observed data. Similarly, for the omnibus test, the p-value is the proportion of times that the test statistic for the omnibus test is as extreme or more extreme than the actually observed one. See Higgins and Thompson (2004) and Viechtbauer et al. (2015) for more details.
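Analogously, a minimal sketch of the row-permutation test for a single coefficient in a meta-regression model (again purely for illustration; it assumes the dat object from the sketch above and uses model.matrix() to extract the moderator part of the model matrix):

res2 <- rma(yi, vi, mods = ~ ablat + year, data=dat)
X    <- model.matrix(res2)[, -1, drop=FALSE]   # moderators (without the intercept column)
set.seed(1234)
zval.perm <- replicate(1000, {
   Xperm <- X[sample(nrow(X)), , drop=FALSE]   # permute the rows of the model matrix
   rma(dat$yi, dat$vi, mods=Xperm)$zval[2]     # test statistic for 'ablat'
})
### approximate two-sided p-value for the coefficient of 'ablat'
mean(abs(c(res2$zval[2], zval.perm)) >= abs(res2$zval[2]))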
If exact=TRUE, the function will try to carry out an exact permutation test. An exact permutation test requires fitting the model to each possible permutation. However, the number of possible permutations increases rapidly with the number of outcomes/studies (i.e., k). For models without moderators, there are 2^k possible permutations of the signs. Therefore, for k=5, there are 32 possible permutations, for k=10, there are already 1024, and for k=20, there are over one million such permutations.
For models with moderators, the increase in the number of possible permutations may be even more severe. The total number of possible permutations of the model matrix is k!. Therefore, for k=5, there are 120 possible permutations, for k=10, there are 3,628,800, and for k=20, there are over 10^18 permutations of the model matrix.
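These counts are easy to verify in R:

k <- c(5, 10, 20)
2^k            # 32 1024 1048576               (sign permutations, no moderators)
factorial(k)   # 120 3628800 2.432902e+18      (row permutations, with moderators)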
Therefore, going through all possible permutations may become infeasible. Instead of using an exact permutation test, one can set exact=FALSE
(which is also the default). In that case, the function approximates the exact permutation-based p-value(s) by going through a smaller number (as specified by the iter
argument) of random permutations. Therefore, running the function twice on the same data can yield (slightly) different p-values. Setting iter
sufficiently large ensures that the results become stable. For full reproducibility, one can also set the seed of the random number generator before running the function (see ‘Examples’). Note that if exact=FALSE
and iter
is actually larger than the number of iterations required for an exact permutation test, then an exact test will automatically be carried out.
For models with moderators, the exact permutation test actually only requires fitting the model to each unique permutation of the model matrix. The number of unique permutations will be smaller than k! when the model matrix contains recurring rows. This may be the case when only including categorical moderators (i.e., factors) in the model or when any quantitative moderators included in the model can only take on a small number of unique values. When exact=TRUE, the function therefore uses an algorithm to restrict the test to only the unique permutations of the model matrix, which may make the use of the exact test feasible even when k is large.
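For example, with a single categorical moderator, the number of unique row permutations is k! divided by the product of the factorials of the group sizes (a hypothetical example with k=10 studies in three groups):

grp <- rep(c("a","b","c"), times=c(3,4,3))             # k = 10 studies in three groups
factorial(length(grp)) / prod(factorial(table(grp)))   # 4200 unique permutations (instead of 10! = 3628800)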
One can also set exact="i"
in which case the function just returns the number of iterations required for an exact permutation test.
When using random permutations, the function ensures that the very first permutation will always correspond to the original data. This avoids p-values equal to 0.
When permci=TRUE
, the function also tries to obtain permutation-based confidence intervals (CIs) of the model coefficient(s). This is done by shifting the observed effect sizes or outcomes by some amount and finding the most extreme values for this amount for which the permutation-based test would just lead to non-rejection. The calculation of such CIs is computationally expensive and may take a long time to complete. For models with moderators, one can also set permci
to a vector of indices to specify for which coefficient(s) a permutation-based CI should be obtained. When the algorithm fails to determine a particular CI bound, it will be shown as NA
in the output.
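A usage sketch (assuming the mixed-effects model res with the two moderators from the 'Examples' below; note that this can take a long time to run and the number of iterations shown is only illustrative):

### permutation-based CI only for the 2nd coefficient (i.e., for 'ablat')
set.seed(1234)
permutest(res, permci=2, iter=2000)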
The function also works with location-scale models (see rma.uni
for details on such models). Permutation tests will then be carried out for both the location and scale parts of the model. However, note that permutation-based CIs are not available for location-scale models.
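A sketch for a location-scale model (assuming the dat object from the 'Examples' below; the choice of ablat as both a location and a scale moderator is only illustrative):

### location-scale model with absolute latitude as location and scale moderator
res.ls <- rma(yi, vi, mods = ~ ablat, scale = ~ ablat, data=dat)
### permutation tests for both the location and the scale coefficients
permutest(res.ls, iter=1000)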
An object of class "permutest.rma.uni"
. The object is a list containing the following components:
pval: p-value(s) based on the permutation test.

QMp: p-value for the omnibus test of moderators based on the permutation test.

zval.perm: values of the test statistics of the coefficients under the various permutations.

b.perm: the model coefficients under the various permutations.

QM.perm: the test statistic of the omnibus test of moderators under the various permutations.

ci.lb: lower bound of the confidence intervals for the coefficients (permutation-based when permci=TRUE).

ci.ub: upper bound of the confidence intervals for the coefficients (permutation-based when permci=TRUE).

...: some additional elements/values are passed on.
The results are formatted and printed with the print
function. One can also use coef
to obtain the table with the model coefficients, corresponding standard errors, test statistics, p-values, and confidence interval bounds. The permutation distribution(s) can be plotted with the plot
function.
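For example (assuming the permres object created in the 'Examples' below):

coef(permres)   # model results table based on the permutation test
plot(permres)   # plot the permutation distribution(s)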
The p-values obtained with permutation tests cannot reach conventional levels of statistical significance (i.e., p ≤ .05) when k is very small. In particular, for models without moderators, the smallest possible (two-sided) p-value is .0625 when k=5 and .03125 when k=6. Therefore, the permutation test is only able to reject the null hypothesis at α = .05 when k is at least equal to 6. For models with moderators, the smallest possible (two-sided) p-value for a particular model coefficient is .0833 when k=4 and .0167 when k=5 (assuming that each row in the model matrix is unique). Therefore, the permutation test is only able to reject the null hypothesis at α = .05 when k is at least equal to 5. Consequently, permutation-based CIs can also only be obtained when k is sufficiently large.
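These lower bounds follow directly from the number of possible permutations:

2 / 2^(5:6)          # 0.06250 0.03125   (k = 5, 6; models without moderators)
2 / factorial(4:5)   # 0.08333 0.01667   (k = 4, 5; models with moderators)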
When the number of permutations required for the exact test is so large as to be essentially indistinguishable from infinity (e.g., factorial(200)
), the function will terminate with an error.
Determining whether a test statistic under the permuted data is as extreme or more extreme than under the actually observed data requires making >=
or <=
comparisons. To avoid problems due to the finite precision with which computers generally represent numbers (see R FAQ 7.31, "Why doesn't R think these numbers are equal?", for details), the function uses a numerical tolerance (control
argument comptol
, which is set equal to .Machine$double.eps^0.5
by default) when making such comparisons (e.g., instead of sqrt(3)^2 >= 3
, which may evaluate to FALSE
, we use sqrt(3)^2 >= 3 - .Machine$double.eps^0.5
, which should evaluate to TRUE
).
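The comparison from this example, as runnable code:

sqrt(3)^2 >= 3                              # typically FALSE due to finite precision
sqrt(3)^2 >= 3 - .Machine$double.eps^0.5    # TRUE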
When obtaining permutation-based CIs, the function makes use of uniroot
. By default, the desired accuracy is set equal to .Machine$double.eps^0.25
and the maximum number of iterations to 100
. The desired accuracy and the maximum number of iterations can be adjusted with the control
argument (i.e., control=list(tol=value, maxiter=value)
). Also, the interval searched for the CI bounds may be too narrow, leading to NA
for a bound. In this case, one can try setting control=list(distfac=value)
with a value larger than 1 to extend the interval (the value indicating a multiplicative factor by which to extend the width of the interval searched) or control=list(extendInt="yes")
to allow uniroot
to extend the interval dynamically (in which case it can happen that a bound may try to drift towards ±∞).
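A usage sketch of these control settings (assuming a fitted model res; the values shown are purely illustrative):

permutest(res, permci=TRUE, control=list(tol=1e-4, maxiter=50))
### if a CI bound is returned as NA, widen the search interval ...
permutest(res, permci=TRUE, control=list(distfac=2))
### ... or let uniroot() extend the interval dynamically
permutest(res, permci=TRUE, control=list(extendInt="yes"))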
Wolfgang Viechtbauer (wvb@metafor-project.org; https://www.metafor-project.org)
Follmann, D. A., & Proschan, M. A. (1999). Valid inference in random effects meta-analysis. Biometrics, 55(3), 732–737. https://doi.org/10.1111/j.0006-341x.1999.00732.x
Good, P. I. (2009). Permutation, parametric, and bootstrap tests of hypotheses (3rd ed.). New York: Springer.
Higgins, J. P. T., & Thompson, S. G. (2004). Controlling the risk of spurious findings from meta-regression. Statistics in Medicine, 23(11), 1663–1682. https://doi.org/10.1002/sim.1752
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03
Viechtbauer, W., López-López, J. A., Sánchez-Meca, J., & Marín-Martínez, F. (2015). A comparison of procedures to test for moderators in mixed-effects meta-regression models. Psychological Methods, 20(3), 360–374. https://doi.org/10.1037/met0000023
Viechtbauer, W., & López-López, J. A. (2022). Location-scale models for meta-analysis. Research Synthesis Methods, 13(6), 697–715. https://doi.org/10.1002/jrsm.1562
rma.uni
for the function to fit models for which permutation tests can be conducted.
print
and plot
for the print and plot methods and coef
for a method to extract the model results table.
### calculate log risk ratios and corresponding sampling variances
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
### random-effects model
res <- rma(yi, vi, data=dat)
res
## Not run:
### permutation test (approximate and exact)
set.seed(1234) # for reproducibility
permutest(res)
permutest(res, exact=TRUE)
## End(Not run)
### mixed-effects model with two moderators (absolute latitude and publication year)
res <- rma(yi, vi, mods = ~ ablat + year, data=dat)
res
### number of iterations required for an exact permutation test
permutest(res, exact="i")
## Not run:
### permutation test (approximate only; exact not feasible)
set.seed(1234) # for reproducibility
permres <- permutest(res, iter=10000)
permres
### plot of the permutation distribution for absolute latitude
### dashed horizontal line: the observed value of the test statistic (in both tails)
### black curve: standard normal density (theoretical reference/null distribution)
### blue curve: kernel density estimate of the permutation distribution
### note: the tail area under the permutation distribution is larger
### than under a standard normal density (hence, the larger p-value)
plot(permres, beta=2, lwd=c(2,3,3,4), xlim=c(-5,5))
## End(Not run)