knitr::opts_chunk$set( collapse = TRUE, comment = "#>" )
The bayesWilcoxTest R package implements a Bayesian alternative to the classical Wilcoxon Rank test and is intended as an addition to the BayesianFirstAid package [@baath2014bayesian]. Following @gelman2014bayesian, p. 97, the ranks of the data are transformed to quantiles of a standard Gaussian distribution. This allows the transformed data to be modeled as Gaussian via Bayesian methods with uninformative priors. The resulting posterior distribution can be interpreted similarly to the classical hypothesis test, but provides more detailed information and has a more straightforward interpretation. The package implements both a paired and an independent sample test.
In a frequentist framework, hypothesis testing is performed in a way that is distinct from other forms of inference, such as parameter estimation. A frequentist hypothesis test assesses the compatibility of the observed data with a given (null) hypothesis on the model parameters, without regard to which parameter values best fit the given data. Parameter estimation, on the other hand, is concerned with finding the parameter estimate most compatible with the given data. In probability notation, frequentist hypothesis tests evaluate the p-value, loosely written as $P(y|\theta_0)$: the probability of obtaining data at least as extreme as the observed data $y$ under the assumption that the model under the null hypothesis, with parameters $\theta_0$, is the true process by which the data are generated. Frequentist maximum likelihood estimation, in contrast, finds the value of $\theta$ for which the likelihood ${\cal L}(\theta|y) = P(y|\theta)$ of the observed data is maximal, i.e. the parameter value under which the observed data are most probable.
From a Bayesian perspective, parameter estimates and hypothesis tests on parameters both involve the same estimation procedure. The outcome of Bayesian model estimation is a full probability distribution of the parameters in question, the posterior distribution $P(\theta|y)$. This posterior distribution can directly be employed to test hypotheses on the parameters at hand.
One way of doing so is via Bayes factors, which compare two models, usually one under the null and one under an alternative hypothesis, by the ratio of their marginal likelihoods. This ratio indicates which model can be considered more plausible given the observed data. A major difference to frequentist hypothesis tests is that evidence both for and against the null hypothesis is considered; thus, the equivalent of "accepting" the null hypothesis is possible. The drawback of using Bayes factors for hypothesis testing is that they can only be usefully employed for clearly discrete alternatives of parameter values, as noted by @gelman2014bayesian, p. 182. For continuous models with infinitely many parameter values between alternative hypotheses, the use of Bayes factors is not helpful.
Another way of Bayesian hypothesis testing is presented by @kruschke2013bayesian, who describes a Bayesian alternative to the frequentist t-test. This approach was extended to different types of hypothesis tests in the BayesianFirstAid R package [@baath2014bayesian], and it is the approach followed by the bayesWilcoxTest package. Kruschke's BEST ("Bayesian Estimation Supersedes the t Test") procedure for the two-sample test models the data as Student's t distributions with uninformative priors. The resulting posterior distributions for the $\mu$ parameter of both groups give the probabilities of the means exceeding or falling below any given value, and the probability of the difference between them falling below or above zero gives the probability that the mean of the first group is lower or higher than the mean of the second group.
This way of Bayesian hypothesis testing can be illustrated using the BayesianFirstAid alternative to the frequentist binomial test.
require(BayesianFirstAid)
Consider an example data set of $68$ successes out of $92$ trials. The frequentist test of the null hypothesis that the rate of success is $0.75$ or greater gives a p-value of $0.44$. Applying the commonly used cutoff of $0.05$, the test cannot reject the null hypothesis.
# Setting up example data
data <- c(68, 24)
# Classical test
binom.test(data, p = 3/4, alternative = "l")
BayesianFirstAid's binomial test gives a full posterior distribution of the parameter $\theta$, the rate of success. While a substantial portion, $37\%$, of the posterior mass lies above $0.75$, the test indicates that $\theta$ is more likely to lie below $0.75$, with a probability of $63\%$. This additional information, together with the fact that posterior densities can be interpreted directly as the probability of the parameter falling into a certain range, is an advantage of BEST-style Bayesian hypothesis tests.
# Bayesian alternative
bayesBinom <- bayes.binom.test(data, p = 3/4)
print(bayesBinom)
plot(bayesBinom)
Hypothesis tests such as the frequentist t-test and BEST are based on the assumption that the data follow a Gaussian distribution (for the frequentist test) or a Student's t distribution (for BEST). In addition, the data need to be measured on an interval scale, which excludes data on an ordinal scale such as grades or ratings. For cases in which these assumptions are violated, non-parametric alternatives exist: the Wilcoxon Signed Rank test for paired samples, due to @wilcoxon1945individual, and the independent-sample Wilcoxon Rank Sum test, also known as the Mann-Whitney test, first proposed by @wilcoxon1945individual and further developed by @mann1947test.
The Wilcoxon Rank test attains robustness against outliers by transforming the data to their ranks. The means of two samples can thus be compared via the relative positions of their data points, without reference to their absolute magnitudes. The test therefore does not assume that the data are normally distributed; it does, however, assume that both samples are drawn from the same distribution.
Under the null hypothesis that both samples are drawn from populations with the same mean, the exact (discrete) distribution of the rank statistic can be derived for any given sample size. The procedures differ for the paired sample Wilcoxon Signed Rank test and the independent sample Mann-Whitney test.
Assuming that the two groups $x$ and $y$ were obtained from independent samples, the Mann-Whitney test first assigns a rank to each data point reflecting its relative position in the combined data from $x$ and $y$, resulting in the ranks $r_x$ and $r_y$. Since the set of ranks is the same for any dataset of a given combined size $n$, the sums of the ranks $r_x$ and $r_y$ can be compared to the sums expected if the ranks were equally distributed among $x$ and $y$, which corresponds to the null hypothesis of $x$ and $y$ having the same mean. The expected sum for a given group is equal to the average rank for the given sample size times the number of observations in the group. For $n_x = 8$ values in the first sample and $n_y = 4$ values in the second sample, giving a combined sample size of $n = n_x + n_y = 12$, the total sum of ranks is $78$, with an average rank of $6.5$. Under the null hypothesis, the expected sum of the ranks $r_x$ is thus $52$, with $\sum r_y = 26$.
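As a quick check of this arithmetic, the following sketch (using an arbitrary, hypothetical dataset chosen only so that $n_x = 8$ and $n_y = 4$) computes the observed and expected rank sums in R:

# Hypothetical example data with n_x = 8 and n_y = 4
x <- c(3.1, 4.5, 2.2, 5.8, 4.9, 3.7, 6.4, 5.1)
y <- c(2.8, 4.0, 3.3, 5.5)
ranks <- rank(c(x, y))        # ranks in the combined sample
r_x <- ranks[1:length(x)]
r_y <- ranks[-(1:length(x))]
sum(r_x); sum(r_y)            # observed rank sums
n <- length(x) + length(y)
mean_rank <- (n + 1)/2        # average rank, here 6.5
length(x) * mean_rank         # expected sum for x under the null: 52
length(y) * mean_rank         # expected sum for y under the null: 26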
The paired sample Wilcoxon Signed Rank test operates by first taking the differences between the two samples, $d = x - y$, and then assigning ranks to the absolute values of these differences, $r_d$. Each rank is given a positive or negative sign depending on whether the corresponding value of $d$ is positive or negative, and the sum of these signed ranks is computed. Under the null hypothesis, the expected sum of the signed ranks is zero.
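The computation of the signed rank sum can be sketched in a few lines of R, again using hypothetical data chosen purely for illustration:

# Hypothetical paired data
x <- c(5.2, 4.0, 6.3, 5.9, 4.8)
y <- c(4.9, 4.4, 5.1, 5.0, 4.6)
d <- x - y                    # paired differences
r_d <- rank(abs(d))           # ranks of the absolute differences
signed_ranks <- sign(d) * r_d # attach the sign of each difference
sum(signed_ranks)             # expected to be close to zero under the null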
In the following, a BEST-style Bayesian testing procedure as an alternative to the frequentist Wilcoxon Rank test will be presented.
The present section describes the estimation procedure for a Bayesian alternative to the frequentist Wilcoxon Rank test, as implemented by the bayesWilcoxTest package.
The Bayesian Wilcoxon test follows an estimation procedure outlined in @gelman2014bayesian and further described in a blog post by Andrew @gelman2015dont. As in the frequentist Wilcoxon procedure, the data are replaced by their ranks in the combined sample. Subsequently, the ranks are transformed to quantiles of a standard Gaussian distribution by applying the Gaussian quantile function to the sequence
$$\frac{1}{2n}, \frac{3}{2n}, ..., \frac{2i-1}{2n}, ..., \frac{2n - 1}{2n}$$
which results in a discrete distribution that approaches a standard Gaussian distribution as $n$ increases. A similar approach was proposed by @van1953neuer, who suggested using a sequence of equidistant points instead of the sequence above. The differences between the resulting transformed ranks of two groups can in turn be modeled as a Gaussian distribution. As noted in @gelman2014bayesian, in this procedure the difference between the two group means translates into a mean distance in the quantiles of the combined data.
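In R, this transformation amounts to a single call to the Gaussian quantile function qnorm; a minimal sketch with hypothetical data could look as follows:

# Hypothetical data from two groups
x <- c(0.8, 1.9, 1.4, 2.3)
y <- c(1.1, 0.9, 1.7)
n <- length(x) + length(y)
ranks <- rank(c(x, y))                      # ranks in the combined sample
zRanks <- qnorm((2 * ranks - 1) / (2 * n))  # transformed ranks
zRanks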
Via Bayesian estimation, a Gaussian model with uninformative priors can then be fitted in order to obtain a BEST-style hypothesis test. The choice of these uninformative priors is discussed in the following section.
For the two-sample case, both groups are modeled as independent Gaussian distributions. Thus, prior distributions for the mean and variance of both groups are required. In order to keep these priors uninformative, uniform distributions are employed, and since the distribution of the transformed ranks is known and depends only on the sample size, the upper and lower limits of the prior distributions can be derived.
For the variance, the lower limit is zero, since it is possible that all data from one group have the same value, resulting in the assignment of equal, tied ranks and thus a variance of zero. The maximum variance is realized if one group consists of only two data points which are, at the same time, the highest and lowest values of both groups combined. The transformed ranks assigned to these two values depend on the overall sample size. The smallest transformed rank is given by $Z_{min} = \Phi^{-1}(\frac{1}{2n})$, the largest by $Z_{max} = \Phi^{-1}(\frac{2n-1}{2n})$. Due to the symmetry of the Gaussian quantile function, $Z_{min} = -Z_{max}$, which implies a zero mean for the two values. The maximum variance is thus given by
$$\sigma^2_{max}(n) = \frac{1}{2}(\Phi^{-1}(\frac{2n-1}{2n}))^2 + \frac{1}{2}(\Phi^{-1}(\frac{2n-1}{2n}))^2 = (\Phi^{-1}(\frac{2n-1}{2n}))^2$$
which results in a maximum standard deviation of $\sigma_{max}(n) = \Phi^{-1}(\frac{2n-1}{2n}) = -\Phi^{-1}(\frac{1}{2n})$.
The lowest possible mean for one of the two groups is realized if this group again consists only of two data points, which are the two lowest values of the overall data. Their transformed ranks would then be given by $Z_{min1} = \Phi^{-1}(\frac{1}{2n})$ and $Z_{min2} = \Phi^{-1}(\frac{3}{2n})$, resulting in a lowest possible mean of
$$\mu_{min}(n) = \frac{1}{2}(\Phi^{-1}(\frac{1}{2n}) + \Phi^{-1}(\frac{3}{2n}))$$
and, due to symmetry of the quantile function, a highest possible mean of
$$\mu_{max}(n) = -\frac{1}{2}(\Phi^{-1}(\frac{1}{2n}) + \Phi^{-1}(\frac{3}{2n}))$$
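These limits can be computed directly in R. The following sketch (the helper function prior_limits is hypothetical and not part of the package) evaluates them for a combined sample size of $n = 12$:

# Hypothetical helper: prior limits for the two-sample model,
# given the combined sample size n
prior_limits <- function(n) {
  sigma_max <- qnorm((2 * n - 1) / (2 * n))                     # maximum standard deviation
  mu_min    <- 0.5 * (qnorm(1 / (2 * n)) + qnorm(3 / (2 * n)))  # lowest possible mean
  c(sigma_max = sigma_max, mu_min = mu_min, mu_max = -mu_min)
}
prior_limits(12)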
For the paired sample test, a single Gaussian distribution is fitted to the differences between the transformed ranks of the two groups. Again, uniformly distributed priors are employed for the mean and variance.
In the paired sample case, the variance depends on the ordering of the
observations in the two groups.
A value of zero can be realized if all values in the first
group are smaller than all values in the second group, and both groups are
sorted in ascending order. The difference in transformed ranks would then be
the same for all pairs of values, resulting in a smallest possible variance of
zero.
As the variance depends on the ordering of the observations, the largest possible variance is more difficult to determine exactly. The maximum variance of the differences in transformed ranks is larger than the variance of the individual transformed ranks; thus, a larger upper limit than in the two-sample case is required.
bayesWilcoxTest uses the range of the transformed ranks as a rough estimate
of the upper limit of the standard deviation, since the standard deviation
cannot exceed this value here. The range of the transformed ranks is, due to
symmetry, twice the largest of the transformed rank values, i.e.
$$ \sigma_{max}(n) = 2\Phi^{-1}(\frac{2n-1}{2n}) = -2\Phi^{-1}(\frac{1}{2n})$$
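For illustration, with a combined sample size of $n = 12$ this upper limit evaluates to roughly $3.46$, twice the two-sample limit of about $1.73$:

n <- 12                            # combined sample size, for illustration
2 * qnorm((2 * n - 1) / (2 * n))   # upper prior limit for sigma, approx. 3.46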
The largest and smallest possible mean of the differences in transformed ranks can be obtained in a more straightforward fashion. The largest mean is realized if all elements in $x$ are larger than all elements in $y$, and the smallest mean in the reverse case; due to symmetry, $\mu_{min} = -\mu_{max}$. Calculating the mean of the differences for $x > y$ at different sample sizes reveals that $\mu_{max}$ converges to a value just below $1.6$:
pairDiff_means <- function(N) {
  # Create x and y so that x > y
  x <- seq(1.1, 2.1, by = (1/(N - 1)))
  y <- seq(0, 1, by = (1/(N - 1)))
  n <- length(x) + length(y)
  ranks <- rank(c(x, y)) # Replace values by ranks
  # Transformed ranks
  seqQ <- seq(1, 2*n - 1, by = 2)/(2*n)
  zRanks <- qnorm(seqQ[ranks])
  zRanksX <- zRanks[1:length(x)]
  zRanksY <- zRanks[-(1:length(x))]
  mean_diff <- mean(zRanksX - zRanksY)
  return(mean_diff)
}
means <- c()
for (i in 2:500) {
  means <- c(means, pairDiff_means(i))
}
plot(means, type = "l", ylab = "mu_max", xlab = "N")
The upper and lower limits of the uniform prior distribution for the mean of the transformed rank differences are thus set to $\mu_{min} = -1.6$ and $\mu_{max} = 1.6$.
For both the paired and independent sample case, initial values for the MCMC are set to $\mu = 0$ and $\sigma = 1$, i.e., the values which are obtained under the null hypothesis.
For the paired-sample test of the means of two groups $x$ and $y$, the procedure consists of first assigning ranks to the combined data, from $1$ to $n = n_x + n_y$. These ranks are then transformed to quantiles of the standard Gaussian. The test models the differences of these transformed ranks as a Gaussian distribution, with uniform priors for mean and variance.
knitr::include_graphics("oneSampleWilcoxDiagram.svg")
The two-sample test assigns transformed ranks as in the paired sample case, but models each group independently as a Gaussian with uniform priors. The difference between the two groups is obtained as the difference between the posterior distributions for both groups.
knitr::include_graphics("twoSampleWilcoxDiagram.svg")
The bayesWilcoxTest package is intended as an addition to the
BayesianFirstAid [@baath2014bayesian] package. As such, bayesWilcoxTest
follows the structure of BayesianFirstAid functions, makes use of the
utility functions from BayesianFirstAid, and relies on modifying
BayesianFirstAid code for its S3 functions.
In addition, the package's main function, bayes.wilcox.test, is designed so that it can be called in exactly the same way as R's built-in wilcox.test function.
The package can be obtained from github and installed using the devtools package (https://CRAN.R-project.org/package=devtools) via
require(devtools)
install_github("joereinhardt/BayesianFirstAid-Wilcoxon")
The package exports a main function, bayes.wilcox.test, for performing the paired and independent sample Bayesian Wilcoxon test, along with several S3 functions which can be applied to the returned objects.
bayes.wilcox.test. Performs the Bayesian Wilcoxon test, as a paired or independent sample test depending on the arguments used in the function call. First, some preliminary checks for consistency of the data are performed. The function then applies the Gaussian quantile transformation to the ranks of the data and estimates the paired or independent sample model using the rjags package (https://cran.r-project.org/package=rjags). The function returns a list of class "bayes_paired_wilcox_test" or "bayes_two_sample_wilcox_test", respectively, containing information on the estimated model and samples from the posterior distribution generated by Markov chain Monte Carlo (MCMC).
print.bayes_paired_wilcox_test and print.bayes_two_sample_wilcox_test. Print concise information on the Bayesian Wilcoxon test.
summary.bayes_paired_wilcox_test and summary.bayes_two_sample_wilcox_test. Print a more detailed summary of the test.
plot.bayes_paired_wilcox_test and plot.bayes_two_sample_wilcox_test. Plot the generated posterior distributions.
model.code.bayes_paired_wilcox_test and model.code.bayes_two_sample_wilcox_test. Print code which can be used to run the model exactly as in the main function, allowing the user to modify the test.
diagnostics.bayes_paired_wilcox_test and diagnostics.bayes_two_sample_wilcox_test. Provide diagnostic information on the MCMC procedure, including trace plots of the modeled parameters.
This section uses the examples from R's built-in wilcox.test help file in order to display bayesWilcoxTest's functionality.
The paired-sample example is taken from @hollander1973nonparametric, p. 29f, and concerns Hamilton depression scale factor measurements taken before and after therapy.
# Copied from the wilcox.test help file:
# Hamilton depression scale factor measurements in 9 patients with
# mixed anxiety and depression, taken at the first (x) and second
# (y) visit after initiation of a therapy (administration of a
# tranquilizer).
x <- c(1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30)
y <- c(0.878, 0.647, 0.598, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29)
wilcox.test(x, y, paired = TRUE, alternative = "greater")
The bayes.wilcox.test function can be used in exactly the same way as the classical function by prepending "bayes." to the original function call.
require(bayesWilcoxTest)
hamiltonBayesWilcox <- bayes.wilcox.test(x, y, paired = TRUE,
                                         alternative = "greater")
# Print out concise information on the test
print(hamiltonBayesWilcox)
# Show a more detailed summary
summary(hamiltonBayesWilcox)
# Visual inspection of the posterior
plot(hamiltonBayesWilcox)
# MCMC diagnostics
diagnostics.bayes_paired_wilcox_test(hamiltonBayesWilcox)
# Obtain model code in order to modify as needed
model.code.bayes_paired_wilcox_test(hamiltonBayesWilcox)
The example dataset for the independent-sample test contains data from @hollander1973nonparametric, p. 69f, on the permeability of the human placental membrane.
# Copied from the wilcox.test help file:
# Permeability constants of the human chorioamnion (a placental
# membrane) at term (x) and between 12 to 26 weeks gestational
# age (y). The alternative of interest is greater permeability
# of the human chorioamnion for the term pregnancy.
x <- c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64, 0.73, 1.46)
y <- c(1.15, 0.88, 0.90, 0.74, 1.21)
wilcox.test(x, y, alternative = "g")
Again, calling the original function while prepending "bayes." performs the Bayesian two-sample Wilcoxon test:
membraneBayesWilcox <- bayes.wilcox.test(x, y, alternative = "g")
# Print out concise information on the test
print(membraneBayesWilcox)
# Show a more detailed summary
summary(membraneBayesWilcox)
# Visual inspection of the posterior
plot(membraneBayesWilcox)
# MCMC diagnostics
diagnostics.bayes_two_sample_wilcox_test(membraneBayesWilcox)
# Obtain model code in order to modify as needed
model.code.bayes_two_sample_wilcox_test(membraneBayesWilcox)
The bayesWilcoxTest package adds a Bayesian alternative to the frequentist wilcox.test function to the BayesianFirstAid package [@baath2014bayesian]. The main function can be used identically to the wilcox.test function by prepending "bayes." to the function call.
As with other BEST-style Bayesian hypothesis tests, the Bayesian Wilcoxon allows a straightforward interpretation of posterior mass as the probability of the parameter value falling in a certain range. Compared to the frequentist tests, additional information is obtained: where a frequentist test that cannot reject the null hypothesis provides no further information on the hypotheses at hand, the Bayesian alternative gives an indication of which hypothesis can be considered more likely given the observed data.
Naturally, all caveats concerning Bayesian hypothesis testing also apply to the bayesWilcoxTest package. While the use of uninformative priors counters the common accusation of subjectivity in Bayesian methods, employing uniform ignorance priors makes the assumption that all possible parameter values are equally likely. @trafimow2005ubiquitous elaborates on problems with this "Laplacian" assumption.
The bayesWilcoxTest package implements a paired and an independent sample test. The built-in wilcox.test function additionally offers a one-sample test, for which a Bayesian alternative model is not straightforward to find. One possibility, sketched below, would be to create an artificial second data vector of the same length as the sample, containing the mean to be tested against in every element. The difference between the two vectors could then be modeled as in the paired sample test. A possible problem with this procedure lies in the assignment of tied ranks for all elements of the second vector, which could be alleviated by adding arbitrarily small random noise to each value. Still, since there would be practically no overlap between the two groups even if the null hypothesis were true, it remains to be tested whether this is a sensible procedure.
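A rough sketch of this idea follows; it is purely illustrative, the hypothesized location mu0 and the noise level are arbitrary choices, and the final call is left commented out since the procedure is untested:

# Illustrative only: a possible one-sample work-around via the paired test
x   <- c(1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30)
mu0 <- 1.5                                  # hypothesized location
set.seed(1)
y_artificial <- rep(mu0, length(x)) +
  rnorm(length(x), sd = 1e-6)               # tiny jitter to avoid tied ranks
# bayes.wilcox.test(x, y_artificial, paired = TRUE)  # untested idea, see text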