hypothesis_test: (Pairwise) comparisons between predictions

View source: R/hypothesis_test.R


(Pairwise) comparisons between predictions

Description

Function to test differences of adjusted predictions for statistical significance. This is usually called contrasts or (pairwise) comparisons.

Usage

hypothesis_test(model, ...)

## Default S3 method:
hypothesis_test(
  model,
  terms = NULL,
  test = "pairwise",
  equivalence = NULL,
  p_adjust = NULL,
  df = NULL,
  ci.lvl = 0.95,
  verbose = TRUE,
  ...
)

## S3 method for class 'ggeffects'
hypothesis_test(
  model,
  test = "pairwise",
  equivalence = NULL,
  p_adjust = NULL,
  df = NULL,
  verbose = TRUE,
  ...
)

Arguments

model

A fitted model object, or an object of class ggeffects.

...

Arguments passed down to data_grid() when creating the reference grid, and to marginaleffects::predictions() or marginaleffects::slopes(), respectively.

terms

Character vector with the names of the focal terms from model, for which contrasts or comparisons should be displayed. At least one term is required, maximum length is three terms. If the first focal term is numeric, contrasts or comparisons for the slopes of this numeric predictor are computed (possibly grouped by the levels of further categorical focal predictors).
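
For illustration, a minimal sketch of the slope behaviour described above, using the built-in iris data rather than the efc data from the Examples (ggeffects and marginaleffects are assumed to be installed; the model is purely illustrative):

library(ggeffects)
m_iris <- lm(Sepal.Length ~ Petal.Width * Species, data = iris)
# the first focal term is numeric, so slopes of Petal.Width are compared
# between the levels of the second (categorical) focal term, Species
hypothesis_test(m_iris, c("Petal.Width", "Species"))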

test

Hypothesis to test. By default, pairwise comparisons are conducted. See the section Introduction into contrasts and pairwise comparisons below.

equivalence

Lower and upper bounds of the ROPE (region of practical equivalence). Should be "default" or a vector of length two (e.g., c(-0.1, 0.1)). If "default", bayestestR::rope_range() is used to determine the bounds. Instead of using the equivalence argument, it is also possible to call the equivalence_test() method directly. This requires the parameters package to be loaded. When using equivalence_test(), two more columns with information about the ROPE coverage and the decision on H0 are added. Furthermore, it is possible to plot() the results from equivalence_test(). See bayestestR::equivalence_test() and parameters::equivalence_test.lm() for details.
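
A minimal sketch of the equivalence argument, again using the built-in iris data as an assumed example (the ROPE bounds c(-0.1, 0.1) are purely illustrative):

library(ggeffects)
m_iris <- lm(Sepal.Length ~ Species + Petal.Width, data = iris)
# test whether pairwise differences between Species levels are
# practically equivalent to zero, given a ROPE of c(-0.1, 0.1)
hypothesis_test(m_iris, "Species", equivalence = c(-0.1, 0.1))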

p_adjust

Character vector; if not NULL, indicates the method used to adjust p-values. See stats::p.adjust() for details. Further possible adjustment methods are "tukey" and "sidak". Some caution is necessary when adjusting p-values for multiple comparisons; see also the section P-value adjustment below.

df

Degrees of freedom that will be used to compute the p-values and confidence intervals. If NULL, degrees of freedom will be extracted from the model using insight::get_df() with type = "wald".
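
As a brief, hedged sketch, passing degrees of freedom explicitly should be equivalent to the default behaviour of extracting Wald degrees of freedom from the model (the iris model is illustrative only):

library(ggeffects)
m_iris <- lm(Sepal.Length ~ Species, data = iris)
# same result as the default (df = NULL), which extracts Wald df via insight
hypothesis_test(m_iris, "Species", df = insight::get_df(m_iris, type = "wald"))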

ci.lvl

Numeric, the level of the confidence intervals.

verbose

Toggle messages and warnings.

Value

A data frame containing predictions (if test = NULL), or contrasts or pairwise comparisons of adjusted predictions or estimated marginal means.

Introduction into contrasts and pairwise comparisons

There are many ways to test contrasts or pairwise comparisons. A detailed introduction with many (visual) examples is given in the accompanying package vignette on contrasts and pairwise comparisons.

P-value adjustment for multiple comparisons

Note that, for the adjustment methods supported by p.adjust() (see also p.adjust.methods), each row is considered as one set of comparisons, no matter which test was specified. That is, when hypothesis_test() returns eight rows of predictions (e.g., for test = NULL) and p_adjust = "bonferroni", the p-values are adjusted in the same way as if we had a test of pairwise comparisons (test = "pairwise") where eight rows of comparisons are returned. For the methods "tukey" and "sidak", a rank adjustment is done based on the number of combinations of levels from the focal predictors in terms. Thus, the latter two methods may only be useful for certain tests, in particular pairwise comparisons.
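
As an illustrative, non-authoritative sketch of this difference, reusing the efc data from the Examples below (marginaleffects is assumed to be installed):

library(ggeffects)
data(efc)
efc$c161sex <- as.factor(efc$c161sex)
efc$c172code <- as.factor(efc$c172code)
m <- lm(barthtot ~ c12hour + c161sex * c172code + neg_c_7, data = efc)
# "bonferroni": every returned row counts as one comparison
hypothesis_test(m, c("c161sex", "c172code"), p_adjust = "bonferroni")
# "tukey": rank adjustment based on the level combinations of the focal terms
hypothesis_test(m, c("c161sex", "c172code"), p_adjust = "tukey")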

See Also

There is also an equivalence_test() method in the parameters package (parameters::equivalence_test.lm()), which can be used to test contrasts or comparisons for practical equivalence. This method also has a plot() method, hence it is possible to do something like:

library(parameters)
ggpredict(model, focal_terms) |>
  equivalence_test() |>
  plot()

Examples

## Not run: 
if (requireNamespace("marginaleffects") && interactive()) {
  data(efc)
  efc$c172code <- as.factor(efc$c172code)
  efc$c161sex <- as.factor(efc$c161sex)
  levels(efc$c161sex) <- c("male", "female")
  m <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

  # direct computation of comparisons
  hypothesis_test(m, "c172code")

  # passing a `ggeffects` object
  pred <- ggpredict(m, "c172code")
  hypothesis_test(pred)

  # test for slope
  hypothesis_test(m, "c12hour")

  # interaction - contrasts by groups
  m <- lm(barthtot ~ c12hour + c161sex * c172code + neg_c_7, data = efc)
  hypothesis_test(m, c("c161sex", "c172code"), test = NULL)

  # interaction - pairwise comparisons by groups
  hypothesis_test(m, c("c161sex", "c172code"))

  # p-value adjustment
  hypothesis_test(m, c("c161sex", "c172code"), p_adjust = "tukey")

  # specific comparisons
  hypothesis_test(m, c("c161sex", "c172code"), test = "b2 = b1")

  # interaction - slope by groups
  m <- lm(barthtot ~ c12hour + neg_c_7 * c172code + c161sex, data = efc)
  hypothesis_test(m, c("neg_c_7", "c172code"))
}

## End(Not run)
