View source: R/pairwise-comparisons.R
pairwise_comparisons (R Documentation)
Calculate parametric, non-parametric, robust, and Bayes Factor pairwise comparisons between group levels with corrections for multiple testing.
pairwise_comparisons(
data,
x,
y,
subject.id = NULL,
type = "parametric",
paired = FALSE,
var.equal = FALSE,
tr = 0.2,
bf.prior = 0.707,
p.adjust.method = "holm",
digits = 2L,
...
)
Arguments
data
A data frame (or a tibble) from which the specified variables are to be taken. Other data types (e.g., matrix, table, array, etc.) will not be accepted. Additionally, grouped data frames from {dplyr} should be ungrouped before they are entered as data.
x
The grouping (or independent) variable from data.
y
The response (or outcome or dependent) variable from data.
subject.id
Relevant in case of a repeated measures or within-subjects design (i.e., paired = TRUE); it specifies the subject or repeated-measures identifier. If this argument is NULL (the default), the function assumes that the data are already sorted by this identifier.
type
A character specifying the type of statistical approach: "parametric", "nonparametric", "robust", or "bayes". You can specify just the initial letter.
paired
Logical that decides whether the experimental design is repeated measures/within-subjects or between-subjects. The default is FALSE.
var.equal
A logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise, the Welch (or Satterthwaite) approximation to the degrees of freedom is used.
tr
Trim level for the mean when carrying out robust tests. The default is 0.2 (i.e., 20% trimming).
bf.prior
A number between 0.5 and 2 (default 0.707) specifying the prior width to use in calculating Bayes Factors and posterior estimates.
p.adjust.method
Adjustment method for p-values for multiple comparisons. Possible methods are: "holm" (default), "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", and "none" (see the sketch after this argument list).
digits
Number of digits for rounding or significant figures. May also be "signif" to return significant figures or "scientific" to return scientific notation. The default is 2L.
...
Additional arguments passed to other methods.
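The adjustment methods listed above are the same ones understood by stats::p.adjust(); a minimal sketch (an illustration, not part of the original help page) to list them from an R session:
# the adjustment methods accepted by stats::p.adjust(), and hence the values
# `p.adjust.method` is expected to accept
stats::p.adjust.methods
# returns: "holm" "hochberg" "hommel" "bonferroni" "BH" "BY" "fdr" "none"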
Value
The returned tibble data frame can contain some or all of the following columns (the exact columns will depend on the statistical test):
statistic
: the numeric value of a statistic
df
: the numeric value of a parameter being modeled (often degrees
of freedom for the test)
df.error and df
: relevant only if the statistic in question has two degrees of freedom (e.g., ANOVA)
p.value
: the two-sided p-value associated with the observed statistic
method
: the name of the inferential statistical test
estimate
: estimated value of the effect size
conf.low
: lower bound for the effect size estimate
conf.high
: upper bound for the effect size estimate
conf.level
: width of the confidence interval
conf.method
: method used to compute confidence interval
conf.distribution
: statistical distribution for the effect
effectsize
: the name of the effect size
n.obs
: number of observations
expression
: pre-formatted expression containing statistical details
For examples, see data frame output vignette.
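Because the output is a regular tibble, these columns can be manipulated with the usual data frame tools. A minimal sketch (assuming the p.value column listed above is present for the chosen test):
# a minimal sketch: run the default (parametric) pairwise comparisons and keep
# only the comparisons whose adjusted p-value falls below 0.05
library(statsExpressions)
df_pairs <- pairwise_comparisons(data = mtcars, x = cyl, y = wt)
dplyr::filter(df_pairs, p.value < 0.05)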
The tables below provide a summary of:
statistical test carried out for inferential statistics
type of effect size estimate and a measure of uncertainty for this estimate
functions used internally to compute these details
Between-subjects design
Hypothesis testing
Type | Equal variance? | Test | p-value adjustment? | Function used |
Parametric | No | Games-Howell test | Yes | PMCMRplus::gamesHowellTest() |
Parametric | Yes | Student's t-test | Yes | stats::pairwise.t.test() |
Non-parametric | No | Dunn test | Yes | PMCMRplus::kwAllPairsDunnTest() |
Robust | No | Yuen's trimmed means test | Yes | WRS2::lincon() |
Bayesian | NA | Student's t-test | NA | BayesFactor::ttestBF() |
Effect size estimation
Not supported.
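One way to confirm which row of the table applies is to inspect the method column of the returned tibble; a minimal sketch (assuming the method column described earlier is present in the output):
# a minimal sketch: the `method` column records which test was actually run,
# so toggling `var.equal` should switch between the two parametric rows above
library(statsExpressions)
unique(pairwise_comparisons(data = mtcars, x = cyl, y = wt, var.equal = TRUE)$method)
unique(pairwise_comparisons(data = mtcars, x = cyl, y = wt, var.equal = FALSE)$method)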
Within-subjects design
Hypothesis testing
Type | Test | p-value adjustment? | Function used |
Parametric | Student's t-test | Yes | stats::pairwise.t.test() |
Non-parametric | Durbin-Conover test | Yes | PMCMRplus::durbinAllPairsTest() |
Robust | Yuen's trimmed means test | Yes | WRS2::rmmcp() |
Bayesian | Student's t-test | NA | BayesFactor::ttestBF() |
Effect size estimation
Not supported.
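The same check works for the within-subjects tables; a minimal sketch using the bugs_long data set from the examples below (again assuming the method column is present):
# a minimal sketch: for a paired non-parametric analysis, the table above says
# the Durbin-Conover test is used; the `method` column should reflect that
library(statsExpressions)
df_paired <- pairwise_comparisons(
  data = bugs_long, x = condition, y = desire,
  subject.id = subject, type = "nonparametric", paired = TRUE
)
unique(df_paired$method)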
For more, see: https://indrajeetpatil.github.io/ggstatsplot/articles/web_only/pairwise.html
Examples
# for reproducibility
set.seed(123)
library(statsExpressions)
#------------------- between-subjects design ----------------------------
# parametric
# if `var.equal = TRUE`, then Student's t-test will be run
pairwise_comparisons(
data = mtcars,
x = cyl,
y = wt,
type = "parametric",
var.equal = TRUE,
paired = FALSE,
p.adjust.method = "none"
)
# if `var.equal = FALSE`, then Games-Howell test will be run
pairwise_comparisons(
data = mtcars,
x = cyl,
y = wt,
type = "parametric",
var.equal = FALSE,
paired = FALSE,
p.adjust.method = "bonferroni"
)
# non-parametric (Dunn test)
pairwise_comparisons(
data = mtcars,
x = cyl,
y = wt,
type = "nonparametric",
paired = FALSE,
p.adjust.method = "none"
)
# robust (Yuen's trimmed means *t*-test)
pairwise_comparisons(
data = mtcars,
x = cyl,
y = wt,
type = "robust",
paired = FALSE,
p.adjust.method = "fdr"
)
# Bayes Factor (Student's *t*-test)
pairwise_comparisons(
data = mtcars,
x = cyl,
y = wt,
type = "bayes",
paired = FALSE
)
#------------------- within-subjects design ----------------------------
# parametric (Student's *t*-test)
pairwise_comparisons(
data = bugs_long,
x = condition,
y = desire,
subject.id = subject,
type = "parametric",
paired = TRUE,
p.adjust.method = "BH"
)
# non-parametric (Durbin-Conover test)
pairwise_comparisons(
data = bugs_long,
x = condition,
y = desire,
subject.id = subject,
type = "nonparametric",
paired = TRUE,
p.adjust.method = "BY"
)
# robust (Yuen's trimmed means t-test)
pairwise_comparisons(
data = bugs_long,
x = condition,
y = desire,
subject.id = subject,
type = "robust",
paired = TRUE,
p.adjust.method = "hommel"
)
# Bayes Factor (Student's *t*-test)
pairwise_comparisons(
data = bugs_long,
x = condition,
y = desire,
subject.id = subject,
type = "bayes",
paired = TRUE
)