fisher: Fisher's Method


Fisher's Method

Description

Function to carry out Fisher's method.

Usage

fisher(p, adjust = "none", R, m,
       size = 10000, threshold, side = 2, batchsize, nearpd = TRUE, ...)

Arguments

p

vector of length k with the (one- or two-sided) p-values to be combined.

adjust

character string to specify an adjustment method to account for dependence. The default is "none", in which case no adjustment is applied. Methods "nyholt", "liji", "gao", or "galwey" are adjustments based on an estimate of the effective number of tests (see meff). Adjustment method "empirical" uses an empirically-derived null distribution using pseudo replicates. Finally, method "generalized" uses a generalization of Fisher's method based on multivariate theory. See ‘Details’.

R

a k × k symmetric matrix that reflects the dependence structure among the tests. Must be specified if adjust is set to something other than "none". See ‘Details’.

m

optional scalar (between 1 and k) to manually specify the effective number of tests (instead of estimating it via one of the methods described above).

size

size of the empirically-derived null distribution. Can also be a numeric vector of sizes, in which case a stepwise algorithm is used. This (and the following arguments) are only relevant when adjust = "empirical".

threshold

numeric vector to specify the significance thresholds for the stepwise algorithm (only relevant when size is a vector).

side

scalar to specify the sidedness of the p-values that are used to simulate the null distribution (2, by default, for two-sided tests; 1 for one-sided tests).

batchsize

optional scalar to specify the batch size for generating the null distribution. When unspecified (the default), this is done in a single batch.

nearpd

logical to indicate whether an R matrix that is not positive definite should be turned into the nearest positive definite matrix (only relevant when adjust = "empirical" or adjust = "generalized").

...

other arguments.

Details

Fisher's Method

By default (i.e., when adjust = "none"), the function applies Fisher's method to the p-values (Fisher, 1932). Letting p_1, p_2, ..., p_k denote the individual (one- or two-sided) p-values of the k hypothesis tests to be combined, the test statistic is computed with X^2 = -2 sum_{i=1}^{k} ln(p_i). Under the joint null hypothesis, the test statistic follows a chi-square distribution with 2k degrees of freedom, which is used to compute the combined p-value.
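For illustration (using a small set of hypothetical p-values rather than the package's internal code), this computation amounts to:

# minimal sketch of Fisher's method for independent p-values (illustration only)
p <- c(0.02, 0.15, 0.40, 0.07)              # hypothetical p-values
k <- length(p)
X2 <- -2 * sum(log(p))                      # Fisher's test statistic
pchisq(X2, df = 2 * k, lower.tail = FALSE)  # combined p-value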

Fisher's method assumes that the p-values to be combined are independent. If this is not the case, the method can either be conservative (not reject often enough) or liberal (reject too often), depending on the dependence structure among the tests. In this case, one can adjust the method to account for such dependence (to bring the Type I error rate closer to some desired nominal significance level).

Adjustment Based on the Effective Number of Tests

When adjust is set to "nyholt", "liji", "gao" or "galwey", Fisher's method is adjusted based on an estimate of the effective number of tests (see meff for details on these methods for estimating the effective number of tests). In this case, argument R needs to be set to a matrix that reflects the dependence structure among the tests.

There is no general solution for constructing such a matrix, as this depends on the type of test that generated the p-values and the sidedness of these tests. If the p-values are obtained from tests whose test statistics can be assumed to follow a multivariate normal distribution and a matrix is available that reflects the correlations among the test statistics, then the mvnconv function can be used to convert this correlation matrix into the correlations among the (one- or two-sided) p-values, which can then be passed to the R argument. See ‘Examples’.

Once the effective number of tests, m, is estimated based on R using one of the four methods described above, the test statistic of Fisher's method can be modified with X~^2 = (m/k) * X^2, which is then assumed to follow a chi-square distribution with 2m degrees of freedom.
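Continuing the sketch above (with a hypothetical value for the effective number of tests m; this is an illustration, not the package's internal code), the adjustment amounts to:

# sketch of the effective-number-of-tests adjustment (hypothetical m)
p <- c(0.02, 0.15, 0.40, 0.07)                  # hypothetical p-values
k <- length(p)
m <- 3                                          # hypothetical effective number of tests
X2 <- -2 * sum(log(p))
X2.adj <- m / k * X2                            # rescaled test statistic
pchisq(X2.adj, df = 2 * m, lower.tail = FALSE)  # adjusted combined p-value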

Alternatively, one can also directly specify the effective number of tests via the m argument (e.g., if some other method not implemented in the poolr package is used to estimate the effective number of tests). Argument R is then irrelevant and doesn't need to be specified.

Adjustment Based on an Empirically-Derived Null Distribution

When adjust = "empirical", the combined p-value is computed based on an empirically-derived null distribution using pseudo replicates (using the empirical function). This is appropriate if the test statistics that generated the p-values to be combined can be assumed to follow a multivariate normal distribution and a matrix is available that reflects the correlations among the test statistics (which is specified via the R argument). In this case, test statistics are repeatedly simulated from a multivariate normal distribution under the joint null hypothesis, converted into one- or two-sided p-values (depending on the side argument), and Fisher's method is applied. Repeating this process size times yields a null distribution based on which the combined p-value can be computed, or more precisely, estimated, since repeated applications of this method will yield (slightly) different results. To obtain a stable estimate of the combined p-value, size should be set to a large value (the default is 10000, but this can be increased for a more precise estimate). If we consider the combined p-value an estimate of the ‘true’ combined p-value that would be obtained for a null distribution of infinite size, we can also construct a 95% (pseudo) confidence interval based on a binomial distribution.
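As a rough conceptual sketch (not the actual internals of the empirical function), and assuming the mvtnorm package is available for simulating multivariate normal test statistics with hypothetical inputs, such a null distribution could be generated as follows:

# conceptual sketch only; hypothetical values, not the internals of empirical()
k <- 4
R <- 0.5 + diag(0.5, k)                      # hypothetical correlation matrix
p <- c(0.02, 0.15, 0.40, 0.07)               # hypothetical two-sided p-values
obs <- -2 * sum(log(p))                      # observed Fisher statistic
size <- 10000
Z <- mvtnorm::rmvnorm(size, sigma = R)       # test statistics under the joint null
P <- 2 * pnorm(abs(Z), lower.tail = FALSE)   # two-sided p-values (side = 2)
null <- -2 * rowSums(log(P))                 # simulated null distribution
pval <- mean(null >= obs)                    # estimated combined p-value
binom.test(sum(null >= obs), size)$conf.int  # 95% (pseudo) confidence interval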

If batchsize is unspecified, the null distribution is simulated in a single batch, which requires temporarily storing a matrix with dimensions [size,k]. When size*k is large, allocating the memory for this matrix might not be possible. Instead, one can specify a batchsize value, in which case a matrix with dimensions [batchsize,k] is repeatedly simulated until the desired size of the null distribution has been obtained.

One can also specify a vector for the size argument, in which case one must also specify a corresponding vector for the threshold argument. In that case, a stepwise algorithm is used that proceeds as follows. For j = 1, ..., length(size),

  1. estimate the combined p-value based on size[j]

  2. if the combined p-value is greater than or equal to threshold[j], stop (and report the combined p-value); otherwise go back to step 1 with the next element of size.

By setting size to increasing values (e.g., size = c(1000, 10000, 100000)) and threshold to decreasing values (e.g., threshold = c(.10, .01, 0)), one can quickly obtain a fairly accurate estimate of the combined p-value if it is far from significant (e.g., .10 or larger), but home in on a more accurate estimate for a combined p-value that is closer to 0. Note that the last value of threshold should be 0 (and is forced to be 0 inside the function), so that the algorithm is guaranteed to terminate (hence, one can also leave out the last value of threshold, so threshold = c(.10, .01) would also work in the example above). One can also specify a single threshold (which is replicated as often as necessary depending on the length of size).
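As a rough illustration of the control flow only (with a toy stand-in for the p-value estimation step that simply simulates from the independence null; this is not the dependent-case computation used by the package), the stepwise algorithm amounts to:

# sketch of the stepwise algorithm; est.pval() is a toy stand-in that estimates
# the combined p-value from n simulated null statistics (independence case)
obs <- -2 * sum(log(c(0.02, 0.15, 0.40, 0.07)))   # hypothetical observed statistic
est.pval <- function(n, k = 4) mean(rchisq(n, df = 2 * k) >= obs)
size      <- c(1000, 10000, 100000)
threshold <- c(.10, .01, 0)                       # last threshold forced to 0
for (j in seq_along(size)) {
   pval <- est.pval(size[j])                      # step 1: estimate based on size[j]
   if (pval >= threshold[j]) break                # step 2: stop once above threshold[j]
}
pval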

Adjustment Based on Multivariate Theory

When adjust = "generalized", Fisher's method is computed based on a Satterthwaite approximation that accounts for the dependence among the tests, assuming that the test statistics that generated the p-values follow a multivariate normal distribution. In that case, R needs to be set equal to a matrix that contains the covariances among the -2 ln(p_i) values. If a matrix is available that reflects the correlations among the test statistics, this can be converted into the required covariance matrix using the mvnconv function. See ‘Examples’.
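As a sketch of the underlying Satterthwaite computation (assuming, as described above, that R is the full covariance matrix of the -2 ln(p_i) values; hypothetical numbers, not the package's internal code):

# sketch of the Satterthwaite approximation behind the generalized method
p <- c(0.02, 0.15, 0.40, 0.07)                # hypothetical p-values
k <- length(p)
R <- matrix(1.5, k, k); diag(R) <- 4          # hypothetical covariance matrix of -2*ln(p_i)
X2 <- -2 * sum(log(p))
ex <- 2 * k                                   # expected value of X2 under the joint null
vx <- sum(R)                                  # variance of X2 under the joint null
f  <- 2 * ex^2 / vx                           # Satterthwaite degrees of freedom
cc <- vx / (2 * ex)                           # scaling factor
pchisq(X2 / cc, df = f, lower.tail = FALSE)   # combined p-value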

This generalization of Fisher's method is sometimes called Brown's method, based on Brown (1975), although the paper only describes the method for combining one-sided p-values. Both one- and two-sided versions of Brown's method are implemented in poolr.

Value

An object of class "poolr". The object is a list containing the following components:

p

combined p-value.

ci

confidence interval for the combined p-value (only when adjust = "empirical"; otherwise NULL).

k

number of p-values that were combined.

m

estimate of the effective number of tests (only when adjust is one of "nyholt", "liji", "gao" or "galwey"; otherwise NULL).

adjust

chosen adjustment method.

statistic

value of the (adjusted) test statistic.

fun

name of calling function.

Note

The methods underlying adjust = "empirical" and adjust = "generalized" assume that the test statistics that generated the p-values to be combined follow a multivariate normal distribution. Hence, the matrix specified via R must be positive definite. If it is not and nearpd = TRUE, it is turned into the nearest positive definite matrix (based on Higham, 2002, and a slightly simplified version of nearPD from the Matrix package).
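For illustration (using nearPD() from the Matrix package directly, rather than the simplified internal version, and a hypothetical matrix), such a repair looks like this:

# sketch using Matrix::nearPD() (poolr uses a slightly simplified internal version)
R <- matrix(c( 1.0,  0.9, -0.9,
               0.9,  1.0,  0.9,
              -0.9,  0.9,  1.0), nrow = 3)    # hypothetical non-positive definite matrix
eigen(R)$values                               # one eigenvalue is negative
Rpd <- as.matrix(Matrix::nearPD(R, corr = TRUE)$mat)
eigen(Rpd)$values                             # all eigenvalues are now positive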

Author(s)

Ozan Cinar ozancinar86@gmail.com
Wolfgang Viechtbauer wvb@wvbauer.com

References

Brown, M. B. (1975). 400: A method for combining non-independent, one-sided tests of significance. Biometrics, 31(4), 987–992.

Cinar, O. & Viechtbauer, W. (2022). The poolr package for combining independent and dependent p values. Journal of Statistical Software, 101(1), 1–42. https://doi.org/10.18637/jss.v101.i01

Fisher, R. A. (1932). Statistical Methods for Research Workers (4th ed.). Edinburgh: Oliver and Boyd.

Higham, N. J. (2002). Computing the nearest correlation matrix: A problem from finance. IMA Journal of Numerical Analysis, 22(3), 329–343.

Examples

# copy p-values and LD correlation matrix into p and r
# (see help(grid2ip) for details on these data)
p <- grid2ip.p
r <- grid2ip.ld

# apply Fisher's method
fisher(p)

# use mvnconv() to convert the LD correlation matrix into a matrix with the
# correlations among the (two-sided) p-values assuming that the test
# statistics follow a multivariate normal distribution with correlation
# matrix r (note: 'side = 2' by default in mvnconv())
mvnconv(r, target = "p", cov2cor = TRUE)[1:5,1:5] # show only rows/columns 1-5

# adjustment based on estimates of the effective number of tests
fisher(p, adjust = "nyholt", R = mvnconv(r, target = "p", cov2cor = TRUE))
fisher(p, adjust = "liji",   R = mvnconv(r, target = "p", cov2cor = TRUE))
fisher(p, adjust = "gao",    R = mvnconv(r, target = "p", cov2cor = TRUE))
fisher(p, adjust = "galwey", R = mvnconv(r, target = "p", cov2cor = TRUE))

# setting argument 'm' manually
fisher(p, m = 12)

# adjustment based on an empirically-derived null distribution (setting the
# seed for reproducibility)
set.seed(1234)
fisher(p, adjust = "empirical", R = r)

# generate the empirical distribution in batches of size 100
fisher(p, adjust = "empirical", R = r, batchsize = 100)

# using the stepwise algorithm
fisher(p, adjust = "empirical", R = r, size = c(1000, 10000, 100000), threshold = c(.10, .01))

# use mvnconv() to convert the LD correlation matrix into a matrix with the
# covariances among the (two-sided) '-2ln(p_i)' values assuming that the
# test statistics follow a multivariate normal distribution with correlation
# matrix r (note: 'side = 2' by default in mvnconv())
mvnconv(r, target = "m2lp")[1:5,1:5] # show only rows/columns 1-5

# adjustment based on generalized method
fisher(p, adjust = "generalized", R = mvnconv(r, target = "m2lp"))

# when using mvnconv() inside fisher() with adjust = "generalized", the
# 'target' argument is automatically set and doesn't need to be specified
fisher(p, adjust = "generalized", R = mvnconv(r))
