modelTest.merMod: estimate detailed results per variable and effect sizes for both fixed and random effects from lmer models


Description

This function extends the current drop1 method for merMod class objects from the lme4 package. Whereas the default method drops only fixed effects, this method is able to drop both fixed and random effects at once.
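For instance, a minimal sketch contrasting lme4's default drop1 method, which evaluates dropping fixed effect terms only, with modelTest:

library(JWileymisc)
m <- lme4::lmer(extra ~ group + (1 | ID), data = sleep, REML = FALSE)

drop1(m)      ## default method: considers only the fixed effect terms
modelTest(m)  ## jointly tests each variable across fixed and random parts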

Usage

## S3 method for class 'merMod'
modelTest(object, method = c("Wald", "profile", "boot"), control, ...)

Arguments

object

A merMod class object, the fitted result of lmer.

method

A character vector indicating the types of confidence intervals to calculate. One of “Wald”, “profile”, or “boot”.

control

An lmerControl() result, used to control how models are estimated when updating.

...

Additional arguments passed to confint.
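As a hedged illustration of what method controls: the same three interval types can be requested from confint directly, and additional arguments such as nsim are passed through in the same way:

m <- lme4::lmer(extra ~ group + (1 | ID), data = sleep)

confint(m, method = "Wald")                 ## fastest; fixed effects only
## confint(m, method = "profile")           ## slower; profiles the likelihood
## confint(m, method = "boot", nsim = 100)  ## slowest; parametric bootstrap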

Details

At the moment, the function is aimed at lmer models and has very few features for glmer or nlmer models. The primary motivation was to provide an overall test of whether a variable “matters”. In multilevel data, a variable may be included in both the fixed and random effects. Providing an overall test of whether it matters requires jointly testing the fixed and random effects, and this joint test is also needed to provide an overall effect size.
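For example, the following hedged sketch spells out such a joint test by hand for a variable (STRESS) that appears in both the fixed and random parts; the explicit refits and the anova() comparison are illustrative stand-ins, not the function's internal code:

library(JWileymisc)
data(aces_daily, package = "JWileymisc")

## complete cases on the variables used, so both fits use the same rows
d <- na.omit(as.data.frame(aces_daily)[, c("NegAff", "STRESS", "UserID")])

full <- lme4::lmer(NegAff ~ STRESS + (1 + STRESS | UserID),
  data = d, REML = FALSE)
## STRESS removed at all levels: fixed slope and random slope together
reduced <- lme4::lmer(NegAff ~ 1 + (1 | UserID), data = d, REML = FALSE)

## joint likelihood ratio test of the fixed and random STRESS effects
anova(reduced, full)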

The function works by generating a formula with one specific variable or “term” removed at all levels. A model is then fit on this reduced formula and compared to the full model passed in. This is a complex operation for mixed effects models for several reasons. Firstly, R has no default mechanism for dropping terms from both the fixed and random portions. Secondly, not every reduced model is a valid mixed effects model. For example, if a model includes only a random slope with no random intercept, dropping the random slope would leave no random effects at all, and at that point lmer or glmer will not run the model. It is theoretically possible to instead fit the model using lm or glm, but this becomes more complex for certain model comparisons and calculations and is not currently implemented. Marginal and conditional R2 values are calculated for each term, and these are also used to calculate something akin to an f-squared effect size.
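As a hedged sketch of the effect size piece, something akin to Cohen's f-squared can be formed from the marginal R2 of the full and reduced models; r.squaredGLMM() from the MuMIn package is used here only as an illustrative stand-in for the package's own R2 computation:

library(JWileymisc)
library(MuMIn)
data(aces_daily, package = "JWileymisc")

d <- na.omit(as.data.frame(aces_daily)[, c("NegAff", "STRESS", "UserID")])
full <- lme4::lmer(NegAff ~ STRESS + (1 + STRESS | UserID),
  data = d, REML = FALSE)
reduced <- lme4::lmer(NegAff ~ 1 + (1 | UserID), data = d, REML = FALSE)

## marginal R2 (fixed effects only) for each model
r2full <- MuMIn::r.squaredGLMM(full)[1, "R2m"]
r2red  <- MuMIn::r.squaredGLMM(reduced)[1, "R2m"]

## something akin to Cohen's f-squared for the STRESS term
(f2 <- (r2full - r2red) / (1 - r2full))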

This is a new function, so it is important to carefully evaluate the results and check that they are accurate and sensible. Check accuracy by viewing the model formulae for each reduced model and confirming that those are indeed correct. In terms of checking whether a result is sensible, there is a large literature on the difficulty of interpreting main effect tests in the presence of interactions. As it is challenging to detect all interactions, especially ones created outside of R formulae, all terms are tested. However, it likely does not make sense to report results from dropping a main effect while keeping the interaction term, so present and interpret these with caution.
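To make the caution concrete, here is a hedged sketch of the reduced formula implied by dropping a main effect from a model with an interaction (Female is used purely for illustration and is assumed to be a variable in aces_daily):

f <- NegAff ~ STRESS * Female + (1 + STRESS | UserID)

## dropping the STRESS term at all levels still leaves the STRESS:Female
## interaction in the fixed effects, a model that is hard to interpret
update(f, . ~ . - STRESS - (1 + STRESS | UserID) + (1 | UserID))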

Examples

## these examples are slow to run
library(JWileymisc)
m1 <- lme4::lmer(extra ~ group + (1 | ID),
  data = sleep, REML = FALSE)
modelTest(m1)


data(aces_daily, package = "JWileymisc")

strictControl <- lme4::lmerControl(optCtrl = list(
   algorithm = "NLOPT_LN_NELDERMEAD",
   xtol_abs = 1e-10,
   ftol_abs = 1e-10))

m1 <- lme4::lmer(NegAff ~ STRESS + (1 + STRESS | UserID),
  data = aces_daily,
  control = strictControl)
modelTest(m1, method = "profile")

m2 <- lme4::lmer(NegAff ~ STRESS + I(STRESS^2) + (1 + STRESS | UserID),
  data = aces_daily, control = strictControl)

## might normally use more bootstraps but keeping low for faster run
modelTest(m2, method = "boot", nsim = 100)
