Description

Performs Dunn's test of multiple comparisons using rank sums.
Usage

dunn.test(x, g=NA, method=p.adjustment.methods, kw=TRUE, label=TRUE,
  wrap=FALSE, table=TRUE, list=FALSE, rmc=FALSE, alpha=0.05, altp=FALSE)
Arguments

x
    a numeric vector, or a list of numeric vectors. Missing values are ignored. If the former, then groups must be specified using g.

g
    a factor variable, numeric vector, or character vector indicating group. Missing values are ignored.

method
    adjusts the p-values for multiple comparisons using the Bonferroni, Šidák, Holm, Holm-Šidák, Hochberg, Benjamini-Hochberg, or Benjamini-Yekutieli adjustment (see Details). The default is no adjustment for multiple comparisons.

kw
    if TRUE then the results of the Kruskal-Wallis test are reported.

label
    if TRUE then the factor labels are used in the output table.

wrap
    if TRUE then tables are not broken up, to maintain nicely formatted output. If FALSE then output of large tables is broken up across multiple pages.

table
    outputs the results of Dunn's test in a table format, as qualified by the label and wrap options.

list
    outputs the results of Dunn's test in a list format.

rmc
    if TRUE then the reported test statistics and table are based on row minus column, rather than the default column minus row (i.e. the signs of the test statistics are flipped).

alpha
    the nominal level of significance used in the step-up/step-down multiple comparisons procedures (Holm, Holm-Šidák, Hochberg, Benjamini-Hochberg, and Benjamini-Yekutieli).

altp
    if TRUE then p-values are expressed in the alternative format. The default is to express p-value = P(Z ≥ |z|), and reject Ho if p ≤ α/2. When altp=TRUE, p-values are expressed as p-value = P(|Z| ≥ |z|), and Ho is rejected if p ≤ α (see Details).
Details

dunn.test computes Dunn's test (1964) for stochastic dominance and reports the results among multiple pairwise comparisons after a Kruskal-Wallis test for stochastic dominance among k groups (Kruskal and Wallis, 1952). The interpretation of stochastic dominance requires an assumption that the CDF of one group does not cross the CDF of the other. dunn.test makes m = k(k-1)/2 multiple pairwise comparisons based on Dunn's z-test-statistic approximations to the actual rank statistics. The null hypothesis for each pairwise comparison is that the probability of observing a randomly selected value from the first group that is larger than a randomly selected value from the second group equals one half; this null hypothesis corresponds to that of the Wilcoxon-Mann-Whitney rank-sum test. Like the rank-sum test, if the data can be assumed to be continuous, and the distributions are assumed identical except for a difference in location, Dunn's test may be understood as a test for median difference. dunn.test accounts for tied ranks.
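For concreteness, the z approximation can be sketched numerically. The following is an illustrative Python sketch, not the package's source: dunn_z is a hypothetical helper that divides the difference in mean pooled ranks by its standard error under the null, with the usual tie correction. On the mucociliary-efficiency data from the Examples section it reproduces the magnitudes of the z statistics shown in the example output (e.g. 0.641426 for the first pair of groups).

```python
from itertools import chain
import math

def dunn_z(groups, i, j):
    """Dunn's (1964) z approximation for comparing groups i and j
    (illustrative sketch; not the dunn.test implementation)."""
    pooled = list(chain.from_iterable(groups))
    n = len(pooled)
    # rank the pooled sample, giving tied values their average rank
    order = sorted(range(n), key=lambda k: pooled[k])
    ranks = [0.0] * n
    k = 0
    while k < n:
        k2 = k
        while k2 + 1 < n and pooled[order[k2 + 1]] == pooled[order[k]]:
            k2 += 1
        avg = (k + k2) / 2 + 1  # average of 1-based ranks k+1 .. k2+1
        for idx in order[k:k2 + 1]:
            ranks[idx] = avg
        k = k2 + 1
    # mean rank of each group
    means, start = [], 0
    for grp in groups:
        means.append(sum(ranks[start:start + len(grp)]) / len(grp))
        start += len(grp)
    # tie correction: sum of (t^3 - t) over tied values
    counts = {}
    for v in pooled:
        counts[v] = counts.get(v, 0) + 1
    ties = sum(t ** 3 - t for t in counts.values())
    ni, nj = len(groups[i]), len(groups[j])
    se = math.sqrt((n * (n + 1) / 12 - ties / (12 * (n - 1)))
                   * (1 / ni + 1 / nj))
    return (means[i] - means[j]) / se
```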
dunn.test outputs both z test statistics for each pairwise comparison and the p-value = P(Z ≥ |z|) for each. Reject Ho based on p ≤ α/2 (and in combination with p-value ordering for stepwise method options). If you prefer to work with p-values expressed as p-value = P(|Z| ≥ |z|), use the altp=TRUE option, and reject Ho based on p ≤ α (and in combination with p-value ordering for stepwise method options). These are exactly equivalent rejection decisions.
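The equivalence of the two rejection rules is easy to check numerically; a minimal Python sketch (α = 0.05 chosen purely for illustration):

```python
# Hypothetical one-sided p-values p = P(Z >= |z|); the alternative
# format reported with altp=TRUE is 2p = P(|Z| >= |z|).
alpha = 0.05
for p in [0.001, 0.024, 0.025, 0.026, 0.3]:
    altp = 2 * p
    # rejecting at p <= alpha/2 and at altp <= alpha always agree
    assert (p <= alpha / 2) == (altp <= alpha)
```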
Several options are available to adjust p-values for multiple comparisons, including methods to control the family-wise error rate (FWER) and methods to control the false discovery rate (FDR):
"none"
    no adjustment is made. Those comparisons rejected without adjustment at the α level (two-sided test) are starred in the output table, and starred in the list when using the list=TRUE option.

"bonferroni"
    the FWER is controlled using Dunn's (1961) Bonferroni adjustment, and adjusted p-values = min(1, pm). Those comparisons rejected with the Bonferroni adjustment at the α level (two-sided test) are starred in the output table, and starred in the list when using the list=TRUE option.

"sidak"
    the FWER is controlled using Šidák's (1967) adjustment, and adjusted p-values = min(1, 1 - (1 - p)^m). Those comparisons rejected with the Šidák adjustment at the α level (two-sided test) are starred in the output table, and starred in the list when using the list=TRUE option.

"holm"
    the FWER is controlled using Holm's (1979) progressive step-up procedure to relax control on subsequent tests. p-values are ordered from smallest to largest, and adjusted p-values = min[1, p(m+1-i)], where i indexes the ordering. All tests after and including the first test that is not rejected are also not rejected.

"hs"
    the FWER is controlled using the Holm-Šidák adjustment (Holm, 1979): another progressive step-up procedure, but assuming dependence between tests. p-values are ordered from smallest to largest, and adjusted p-values = min[1, 1 - (1 - p)^(m+1-i)], where i indexes the ordering. All tests after and including the first test that is not rejected are also not rejected.

"hochberg"
    the FWER is controlled using Hochberg's (1988) progressive step-down procedure to increase control on successive tests. p-values are ordered from largest to smallest, and adjusted p-values = min[1, p*i], where i indexes the ordering. All tests after and including the first test to be rejected are also rejected.

"bh"
    the FDR is controlled using the Benjamini-Hochberg (1995) adjustment, a step-down procedure appropriate to independent tests or tests that are positively dependent. p-values are ordered from largest to smallest, and adjusted p-values = min[1, pm/(m+1-i)], where i indexes the ordering. All tests after and including the first test to be rejected are also rejected.

"by"
    the FDR is controlled using the Benjamini-Yekutieli (2001) adjustment, a step-down procedure appropriate to dependent tests. p-values are ordered from largest to smallest, and adjusted p-values = min[1, pmC/(m+1-i)], where i indexes the ordering, and the constant C = 1 + 1/2 + ... + 1/m. All tests after and including the first test to be rejected are also rejected.
Because the rejection decisions of the sequential step-up/step-down tests depend on both the p-values and their ordering, those tests rejected using "holm", "hs", "hochberg", "bh", or "by" at the indicated α level are starred in the output table, and starred in the list when using the list=TRUE option.
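The adjustment formulas above can be sketched directly. The following Python sketch is illustrative only (adjust is a hypothetical helper, not the package's implementation); it applies each rule as stated, capping adjusted p-values at 1, and computes only the adjusted values, not the ordering-dependent rejection decisions that the starred output encodes.

```python
def adjust(pvals, method):
    """Adjusted p-values per the rules above (illustrative sketch)."""
    m = len(pvals)
    if method == "none":
        return list(pvals)
    if method == "bonferroni":          # min(1, p*m)
        return [min(1.0, p * m) for p in pvals]
    if method == "sidak":               # min(1, 1 - (1 - p)^m)
        return [min(1.0, 1 - (1 - p) ** m) for p in pvals]
    adj = [0.0] * m
    if method in ("holm", "hs"):
        # step-up: order p-values from smallest to largest
        order = sorted(range(m), key=lambda k: pvals[k])
        for i, k in enumerate(order, start=1):
            if method == "holm":        # min[1, p(m+1-i)]
                adj[k] = min(1.0, pvals[k] * (m + 1 - i))
            else:                       # Holm-Sidak: min[1, 1-(1-p)^(m+1-i)]
                adj[k] = min(1.0, 1 - (1 - pvals[k]) ** (m + 1 - i))
        return adj
    # step-down: order p-values from largest to smallest
    order = sorted(range(m), key=lambda k: -pvals[k])
    C = sum(1.0 / j for j in range(1, m + 1))   # 1 + 1/2 + ... + 1/m
    for i, k in enumerate(order, start=1):
        if method == "hochberg":        # min[1, p*i]
            adj[k] = min(1.0, pvals[k] * i)
        elif method == "bh":            # min[1, p*m/(m+1-i)]
            adj[k] = min(1.0, pvals[k] * m / (m + 1 - i))
        elif method == "by":            # min[1, p*m*C/(m+1-i)]
            adj[k] = min(1.0, pvals[k] * m * C / (m + 1 - i))
    return adj

# e.g. adjust([0.01, 0.02, 0.04], "holm") -> [0.03, 0.04, 0.04]
```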
Value

dunn.test returns:

chi2
    a scalar of the Kruskal-Wallis test statistic adjusted for ties.

Z
    a vector of all m of Dunn's z test statistics.

P
    a vector of p-values corresponding to Z.

altP
    a vector of p-values in the alternative format corresponding to Z (when using the altp=TRUE option).

P.adjust
    a vector of p-values corresponding to Z, adjusted for multiple comparisons using the method specified.

altP.adjust
    a vector of p-values in the alternative format corresponding to Z, adjusted for multiple comparisons (when using the altp=TRUE option).

comparisons
    a vector of strings labeling each pairwise comparison, as qualified by the label and rmc options.
Author(s)

Alexis Dinno (alexis.dinno@pdx.edu)
References

Benjamini, Y. and Hochberg, Y. (1995) Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society. Series B (Methodological). 57, 289–300.
Benjamini, Y. and Yekutieli, D. (2001) The control of the false discovery rate in multiple testing under dependency. Annals of Statistics. 29, 1165–1188.
Dunn, O. J. (1961) Multiple comparisons among means. Journal of the American Statistical Association. 56, 52–64.
Dunn, O. J. (1964) Multiple comparisons using rank sums. Technometrics. 6, 241–252.
Hochberg, Y. (1988) A sharper Bonferroni procedure for multiple tests of significance. Biometrika. 75, 800–802.
Holm, S. (1979) A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics. 6, 65–70.
Kruskal, W. H. and Wallis, W. A. (1952) Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association. 47, 583–621.
Šidák, Z. (1967) Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American Statistical Association. 62, 626–633.
Examples

## Example cribbed and modified from the kruskal.test documentation
## Hollander & Wolfe (1973), 116.
## Mucociliary efficiency from the rate of removal of dust in normal
## subjects, subjects with obstructive airway disease, and subjects
## with asbestosis.
x <- c(2.9, 3.0, 2.5, 2.6, 3.2) # normal subjects
y <- c(3.8, 2.7, 4.0, 2.4)      # with obstructive airway disease
z <- c(2.8, 3.4, 3.7, 2.2, 2.0) # with asbestosis
dunn.test(x=list(x,y,z))

x <- c(x, y, z)
g <- factor(rep(1:3, c(5, 4, 5)),
            labels = c("Normal",
                       "COPD",
                       "Asbestosis"))
dunn.test(x, g)

## Example based on home care data from Dunn (1964)
data(homecare)
attach(homecare)
dunn.test(occupation, eligibility, method="hs", list=TRUE)

## Air quality data set illustrates differences in different
## multiple comparisons adjustments
attach(airquality)
dunn.test(Ozone, Month, kw=FALSE, method="bonferroni")
dunn.test(Ozone, Month, kw=FALSE, method="hs")
dunn.test(Ozone, Month, kw=FALSE, method="bh")
detach(airquality)

  Kruskal-Wallis rank sum test

data: x and group
Kruskal-Wallis chi-squared = 0.7714, df = 2, p-value = 0.68


                     Comparison of x by group
                         (No adjustment)
Col Mean-|
Row Mean |          1          2
---------+----------------------
       2 |   0.641426
         |     0.2606
         |
       3 |   0.226778   0.855235
         |     0.4103     0.1962
  Kruskal-Wallis rank sum test

data: x and g
Kruskal-Wallis chi-squared = 0.7714, df = 2, p-value = 0.68


                       Comparison of x by g
                         (No adjustment)
Col Mean-|
Row Mean |   Asbestos       COPD
---------+----------------------
    COPD |   0.855235
         |     0.1962
         |
  Normal |   0.226778   0.641426
         |     0.4103     0.2606
  Kruskal-Wallis rank sum test

data: occupation and eligibility
Kruskal-Wallis chi-squared = 4.2226, df = 2, p-value = 0.12


              Comparison of occupation by eligibility
                          (Holm-Šidák)
Col Mean-|
Row Mean |   Eligible   No respo
---------+----------------------
No respo |   0.155968
         |     0.4380
         |
Responsi |   2.022198   1.441205
         |     0.0633     0.1439

List of pairwise comparisons: Z statistic (adjusted p-value)
------------------------------------------------------------
Eligible - No responsible person : 0.155968 (0.4380)
Eligible - Responsible person unable : 2.022198 (0.0633)
No responsible person - Responsible person unable : 1.441205 (0.1439)
                    Comparison of Ozone by Month
                           (Bonferroni)
Col Mean-|
Row Mean |          5          6          7          8
---------+--------------------------------------------
       6 |   0.925158
         |     1.0000
         |
       7 |   4.419470   2.244208
         |     0.0000     0.1241
         |
       8 |   4.132813   2.038635   0.286657
         |     0.0002     0.2074     1.0000
         |
       9 |   1.321202   0.002538   3.217199   2.922827
         |     0.9322     1.0000     0.0065     0.0173
                    Comparison of Ozone by Month
                           (Holm-Šidák)
Col Mean-|
Row Mean |          5          6          7          8
---------+--------------------------------------------
       6 |   0.925158
         |     0.4435
         |
       7 |   4.419470   2.244208
         |    0.0000*     0.0722
         |
       8 |   4.132813   2.038635   0.286657
         |    0.0002*     0.0995     0.6245
         |
       9 |   1.321202   0.002538   3.217199   2.922827
         |     0.3239     0.4990    0.0052*    0.0121*
                    Comparison of Ozone by Month
                        (Benjamini-Hochberg)
Col Mean-|
Row Mean |          5          6          7          8
---------+--------------------------------------------
       6 |   0.925158
         |     0.2218
         |
       7 |   4.419470   2.244208
         |    0.0000*    0.0248*
         |
       8 |   4.132813   2.038635   0.286657
         |    0.0001*     0.0346     0.4302
         |
       9 |   1.321202   0.002538   3.217199   2.922827
         |     0.1332     0.4990    0.0022*    0.0043*