pairwise_comparisons: Multiple pairwise comparison for one-way design

View source: R/pairwise_comparisons.R

pairwise_comparisons                    R Documentation

Multiple pairwise comparison for one-way design

Description

Calculate parametric, non-parametric, robust, and Bayes Factor pairwise comparisons between group levels with corrections for multiple testing.

Usage

pairwise_comparisons(
  data,
  x,
  y,
  subject.id = NULL,
  type = "parametric",
  paired = FALSE,
  var.equal = FALSE,
  tr = 0.2,
  bf.prior = 0.707,
  p.adjust.method = "holm",
  k = 2L,
  ...
)

Arguments

data

A data frame (or a tibble) from which the specified variables are to be taken. Other data types (e.g., matrix, table, array, etc.) will not be accepted. Additionally, grouped data frames from {dplyr} should be ungrouped before they are entered as data.

x

The grouping (or independent) variable from data. In case of a repeated measures or within-subjects design, if the subject.id argument is not available or not explicitly specified, the function assumes that the data has already been sorted by such an identifier and creates an internal one. So if your data is not sorted, the results can be inaccurate when there are more than two levels in x and NAs are present. The data is expected to be sorted by the user in a subject-1, subject-2, ... pattern.

y

The response (or outcome or dependent) variable from data.

subject.id

Relevant in case of a repeated measures or within-subjects design (i.e., paired = TRUE), this argument specifies the subject or repeated measures identifier. Important: Note that if this argument is NULL (the default), the function assumes that the data has already been sorted by such an identifier and creates an internal one. So if your data is not sorted and you leave this argument unspecified, the results can be inaccurate when there are more than two levels in x and NAs are present (see the sketch below).
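
As an illustration, here is a minimal sketch of a long-format data frame with an explicit subject identifier; the columns subject, condition, and score (and the data values) are hypothetical and not part of the function's API:

# hypothetical long-format data: one row per subject-condition combination
long_df <- data.frame(
  subject   = rep(paste0("subject-", 1:3), each = 3),
  condition = rep(c("a", "b", "c"), times = 3),
  score     = c(10, 12, 9, 14, 11, 13, 8, 15, 10)
)

# with an explicit identifier, the row order no longer matters
pairwise_comparisons(
  data       = long_df,
  x          = condition,
  y          = score,
  subject.id = subject,
  paired     = TRUE
)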

type

A character specifying the type of statistical approach:

  • "parametric"

  • "nonparametric"

  • "robust"

  • "bayes"

You can specify just the initial letter.
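
For instance, reusing the mtcars example from the Examples section, the following two calls should be equivalent:

pairwise_comparisons(data = mtcars, x = cyl, y = wt, type = "parametric")
pairwise_comparisons(data = mtcars, x = cyl, y = wt, type = "p") # abbreviated form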

paired

Logical that decides whether the experimental design is repeated measures/within-subjects or between-subjects. The default is FALSE.

var.equal

A logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise, the Welch (or Satterthwaite) approximation to the degrees of freedom is used.

tr

Trim level for the mean when carrying out robust tests (default: 0.2). In case of an error, try reducing the value of tr.

bf.prior

A number between 0.5 and 2 (default 0.707), the prior width to use in calculating Bayes factors and posterior estimates. In addition to numeric arguments, several named values are also recognized: "medium", "wide", and "ultrawide", corresponding to r scale values of 1/2, sqrt(2)/2, and 1, respectively. In case of an ANOVA, this value corresponds to scale for fixed effects.
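
For example, a sketch of widening the prior in the Bayesian comparisons shown in the Examples section:

pairwise_comparisons(
  data     = mtcars,
  x        = cyl,
  y        = wt,
  type     = "bayes",
  bf.prior = 1 # wider than the default 0.707
)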

p.adjust.method

Adjustment method for p-values for multiple comparisons. Possible methods are: "holm" (default), "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", "none".
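
These method names match those accepted by stats::p.adjust(); assuming the corrections behave the same way here, a quick sketch of how they differ on a toy vector of raw p-values:

p_raw <- c(0.01, 0.02, 0.04, 0.20)

p.adjust(p_raw, method = "holm")
p.adjust(p_raw, method = "bonferroni")
p.adjust(p_raw, method = "BH")
p.adjust(p_raw, method = "none") # raw p-values, unchanged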

k

Number of digits after the decimal point (should be an integer). Default: k = 2L.

...

Additional arguments passed to other methods.

Value

The returned tibble data frame can contain some or all of the following columns (the exact columns will depend on the statistical test):

  • statistic: the numeric value of a statistic

  • df: the numeric value of a parameter being modeled (often degrees of freedom for the test)

  • df.error and df: relevant only if the statistic in question has two degrees of freedom (e.g., ANOVA)

  • p.value: the two-sided p-value associated with the observed statistic

  • method: the name of the inferential statistical test

  • estimate: estimated value of the effect size

  • conf.low: lower bound for the effect size estimate

  • conf.high: upper bound for the effect size estimate

  • conf.level: width of the confidence interval

  • conf.method: method used to compute confidence interval

  • conf.distribution: statistical distribution for the effect

  • effectsize: the name of the effect size

  • n.obs: number of observations

  • expression: pre-formatted expression containing statistical details

For examples, see the data frame output vignette.
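
As a sketch of how these columns might be accessed (reusing the first mtcars call from the Examples section; the exact columns returned depend on the chosen test):

res <- pairwise_comparisons(data = mtcars, x = cyl, y = wt)

# see which of the columns listed above the chosen test produced
names(res)

# adjusted p-values, one per pair of group levels
res$p.value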

Pairwise comparison tests

The tables below provide a summary of:

  • statistical test carried out for inferential statistics

  • type of effect size estimate and a measure of uncertainty for this estimate

  • functions used internally to compute these details

between-subjects

Hypothesis testing

Type            Equal variance?   Test                        p-value adjustment?   Function used
Parametric      No                Games-Howell test           Yes                   PMCMRplus::gamesHowellTest()
Parametric      Yes               Student's t-test            Yes                   stats::pairwise.t.test()
Non-parametric  No                Dunn test                   Yes                   PMCMRplus::kwAllPairsDunnTest()
Robust          No                Yuen's trimmed means test   Yes                   WRS2::lincon()
Bayesian        NA                Student's t-test            NA                    BayesFactor::ttestBF()

Effect size estimation

Not supported.

within-subjects

Hypothesis testing

Type            Test                        p-value adjustment?   Function used
Parametric      Student's t-test            Yes                   stats::pairwise.t.test()
Non-parametric  Durbin-Conover test         Yes                   PMCMRplus::durbinAllPairsTest()
Robust          Yuen's trimmed means test   Yes                   WRS2::rmmcp()
Bayesian        Student's t-test            NA                    BayesFactor::ttestBF()

Effect size estimation

Not supported.

References

For more, see: https://indrajeetpatil.github.io/ggstatsplot/articles/web_only/pairwise.html

Examples


# for reproducibility
set.seed(123)
library(statsExpressions)

#------------------- between-subjects design ----------------------------

# parametric
# if `var.equal = TRUE`, then Student's t-test will be run
pairwise_comparisons(
  data            = mtcars,
  x               = cyl,
  y               = wt,
  type            = "parametric",
  var.equal       = TRUE,
  paired          = FALSE,
  p.adjust.method = "none"
)

# if `var.equal = FALSE`, then Games-Howell test will be run
pairwise_comparisons(
  data            = mtcars,
  x               = cyl,
  y               = wt,
  type            = "parametric",
  var.equal       = FALSE,
  paired          = FALSE,
  p.adjust.method = "bonferroni"
)

# non-parametric (Dunn test)
pairwise_comparisons(
  data            = mtcars,
  x               = cyl,
  y               = wt,
  type            = "nonparametric",
  paired          = FALSE,
  p.adjust.method = "none"
)

# robust (Yuen's trimmed means *t*-test)
pairwise_comparisons(
  data            = mtcars,
  x               = cyl,
  y               = wt,
  type            = "robust",
  paired          = FALSE,
  p.adjust.method = "fdr"
)

# Bayes Factor (Student's *t*-test)
pairwise_comparisons(
  data   = mtcars,
  x      = cyl,
  y      = wt,
  type   = "bayes",
  paired = FALSE
)

#------------------- within-subjects design ----------------------------

# parametric (Student's *t*-test)
pairwise_comparisons(
  data            = bugs_long,
  x               = condition,
  y               = desire,
  subject.id      = subject,
  type            = "parametric",
  paired          = TRUE,
  p.adjust.method = "BH"
)

# non-parametric (Durbin-Conover test)
pairwise_comparisons(
  data            = bugs_long,
  x               = condition,
  y               = desire,
  subject.id      = subject,
  type            = "nonparametric",
  paired          = TRUE,
  p.adjust.method = "BY"
)

# robust (Yuen's trimmed means t-test)
pairwise_comparisons(
  data            = bugs_long,
  x               = condition,
  y               = desire,
  subject.id      = subject,
  type            = "robust",
  paired          = TRUE,
  p.adjust.method = "hommel"
)

# Bayes Factor (Student's *t*-test)
pairwise_comparisons(
  data       = bugs_long,
  x          = condition,
  y          = desire,
  subject.id = subject,
  type       = "bayes",
  paired     = TRUE
)

