Estimates mixed models with lme4 and calculates p-values for all fixed
effects. The default method "KR" (= Kenward-Roger) as well as method = "S"
(Satterthwaite) support LMMs: they estimate the model with lmer and then
pass it to the lmerTest anova method (or Anova). The other methods ("LRT" =
likelihood-ratio tests and "PB" = parametric bootstrap) support both LMMs
(estimated via lmer) and GLMMs (i.e., with a family argument, which invokes
estimation via glmer); they estimate a full model and restricted models in
which the parameters corresponding to one effect (i.e., model term) are
withheld (i.e., fixed to 0). Per default, tests are based on Type 3 sums of
squares. print, nice, anova, and summary methods for the returned object of
class "mixed" are available. summary invokes the default lme4 summary
method and shows parameters instead of effects.
lmer_alt is simply a wrapper for mixed that only returns the
"lmerModLmerTest" or "merMod" object and correctly uses the || notation for
removing correlations among factors. This function otherwise behaves like
g/lmer (as for mixed, it calls glmer as soon as a family argument is
present). Use afex_options("lmer_function") to set which function should be
used for estimation. This option determines the class of the returned
object (i.e., "lmerModLmerTest" or "merMod").
mixed(
formula,
data,
type = afex_options("type"),
method = afex_options("method_mixed"),
per_parameter = NULL,
args_test = list(),
test_intercept = FALSE,
check_contrasts = afex_options("check_contrasts"),
expand_re = FALSE,
all_fit = FALSE,
set_data_arg = afex_options("set_data_arg"),
progress = TRUE,
cl = NULL,
return = "mixed",
sig_symbols = afex_options("sig_symbols"),
...
)
lmer_alt(formula, data, check_contrasts = FALSE, ...)

formula 
a formula describing the full mixed model to be fitted. As
this formula is passed to 
data 

type 
type of test on which effects are based. Default is to use type 3
tests, taken from 
method 
character vector indicating which methods for obtaining
p-values should be used: 
per_parameter 

args_test 

test_intercept 
logical. Whether or not the intercept should also be
fitted and tested for significance. Default is FALSE.
check_contrasts 

expand_re 
logical. Should random effects terms be expanded (i.e.,
factors transformed into numerical variables) before fitting with

all_fit 
logical. Should 
set_data_arg 

progress 
if 
cl 
A vector identifying a cluster; used for distributing the
estimation of the different models using several cores (if several models
are calculated). See examples. If 
return 
the default is to return an object of class 
sig_symbols 
Character. What should be the symbols designating
significance? When entering a vector with 
... 
further arguments (such as 
For an introduction to mixed modeling for experimental designs see our chapter (Singmann & Kellen, in press) or Barr, Levy, Scheepers, & Tily (2013). Arguments for using the Kenward-Roger approximation for obtaining p-values are given by Judd, Westfall, and Kenny (2012). Further introductions to mixed modeling for experimental designs are given by Baayen and colleagues (Baayen, 2008; Baayen, Davidson & Bates, 2008; Baayen & Milin, 2010). Specific recommendations on which random effects structure to specify for confirmatory tests can be found in Barr and colleagues (2013) and Barr (2013), but also see Bates et al. (2015).
When method = "KR" (implemented via KRmodcomp), the Kenward-Roger
approximation for degrees of freedom is calculated using lmerTest (if
test_intercept = FALSE) or Anova (if test_intercept = TRUE), which is only
applicable to linear mixed models (LMMs). The test statistic in the output
is an F-value (F). A similar method that requires less RAM is method = "S",
which calculates the Satterthwaite approximation for degrees of freedom via
lmerTest and is also only applicable to LMMs. method = "KR" or method = "S"
provide the best control of Type 1 errors for LMMs (Luke, 2017).
method = "PB" calculates p-values using a parametric bootstrap via
PBmodcomp. This can be used for linear and also generalized linear mixed
models (GLMMs) by specifying a family argument to mixed. Note that you
should specify further arguments to PBmodcomp via args_test, especially
nsim (the number of simulations used to form the reference distribution) or
cl (for using multiple cores). For other arguments see PBmodcomp. Note that
REML (argument to [g]lmer) will be set to FALSE if method is "PB".
method = "LRT" calculates p-values via likelihood-ratio tests implemented
in the anova method for "merMod" objects. This is the method recommended by
Barr et al. (2013; which did not test the other methods implemented here).
Using likelihood-ratio tests is only recommended for models with many
levels of the random effects (> 50), but can be quite helpful in case the
other methods fail (due to memory and/or time limitations). The lme4 faq
also recommends the other methods over likelihood-ratio tests.
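The mechanics of such a likelihood-ratio test can be sketched in plain base R (a hedged illustration using lm for self-containedness; mixed() performs the same full-versus-restricted comparison on [g]lmer fits, with REML = FALSE):

```r
# Sketch: likelihood-ratio test between a full model and a restricted
# model in which one effect (here hp) is withheld.
full       <- lm(mpg ~ wt + hp, data = mtcars)
restricted <- lm(mpg ~ wt, data = mtcars)

# test statistic: 2 times the difference in log-likelihood
chisq <- as.numeric(2 * (logLik(full) - logLik(restricted)))
# df: difference in the number of estimated parameters
df    <- attr(logLik(full), "df") - attr(logLik(restricted), "df")
p     <- pchisq(chisq, df = df, lower.tail = FALSE)
c(chisq = chisq, df = df, p = p)
```

This reproduces the chisq, df, and p.value columns that mixed() reports for method = "LRT".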
For methods "KR" and "S", type 3 and type 2 tests are implemented as in
Anova.
For all other methods, type 3 tests are obtained by comparing a model in
which only the tested effect is excluded with the full model (containing
all effects). For method "nested-KR" (which was the default in previous
versions) this corresponds to the (type 3) Wald tests given by car::Anova
for "lmerMod" models. The submodels in which the tested effect is excluded
are obtained by manually creating a model matrix which is then fitted in
"lme4".
Type 2 tests are truly sequential. They are obtained by comparing a model
in which the tested effect and all higher-order effects (e.g., all
three-way interactions for testing a two-way interaction) are excluded with
a model in which only effects up to the order of the tested effect are
present and all higher-order effects are absent. In other words, there are
multiple full models, one for each order of effects. Consequently, the
results for lower-order effects are identical whether or not higher-order
effects are part of the model. This latter feature is not consistent with
classical ANOVA type 2 tests but is a consequence of the sequential tests
(and I didn't find a better way of implementing the Type 2 tests). This
does not correspond to the (type 2) Wald tests reported by car::Anova.
If check_contrasts = TRUE, contrasts will be set to "contr.sum" for all
factors in the formula if the default contrasts are not equal to
"contr.sum" or attr(factor, "contrasts") != "contr.sum". Furthermore, the
current contrasts (obtained via getOption("contrasts")) will be set at the
cluster nodes if cl is not NULL.
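What this contrast setting amounts to can be shown in base R (a minimal sketch; mixed() applies it to every factor in the formula):

```r
# Sketch: sum-to-zero contrasts, as enforced by check_contrasts = TRUE
f <- factor(c("a", "b", "c"))
contrasts(f)               # default: treatment contrasts
contrasts(f) <- "contr.sum"
contrasts(f)               # now sum-to-zero; last level is coded -1, -1
```

Each contrast column sums to zero, which is what makes Type 3 tests of main effects interpretable in the presence of interactions.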
expand_re = TRUE allows expanding the random effects structure before
passing it to lmer. This makes it possible to disable the estimation of
correlations among random effects for random effects terms containing
factors using the || notation, which may aid in achieving model convergence
(see Bates et al., 2015). This is achieved by first creating a model matrix
for each random effects term individually, renaming and appending the
so-created columns to the data that will be fitted, replacing the actual
random effects term with the so-created variables (concatenated with +),
and then fitting the model. The variables are renamed by prepending all
variables with rei (where i is the number of the random effects term) and
replacing ":" with "_by_".
lmer_alt is simply a wrapper for mixed that is intended to behave like lmer
(or glmer if a family argument is present), but also allows the use of ||
with factors (by always using expand_re = TRUE). This means that lmer_alt
per default does not enforce a specific contrast on factors and only
returns the "lmerModLmerTest" or "merMod" object without calculating any
additional models or p-values (this is achieved by setting return =
"merMod"). Note that it most likely differs from g/lmer in how it handles
missing values, so it is recommended to only pass data without missing
values to it!
One consequence of using expand_re = TRUE is that the data that is fitted
will not be the same as the passed data.frame, which can lead to problems
with, e.g., the predict method. However, the actual data used for fitting
is also returned as part of the mixed object, so it can be used from there.
Note that set_data_arg can be used to change whether the data argument in
the call to g/lmer is set to data (the default) or the name of the data
argument passed by the user.
An object of class "mixed"
(i.e., a list) with the following
elements:
anova_table
a data.frame containing the statistics returned from KRmodcomp. The stat
column in this data.frame gives the value of the test statistic, an F-value
for method = "KR" and a chi-square value for the other two methods.
full_model
the "lmerModLmerTest"
or "merMod"
object returned from estimating the full model. Use
afex_options("lmer_function") for setting which function should be used for
estimation. The possible options are
"lmerTest"
(the default returning an object of class
"lmerModLmerTest"
) and "lme4"
returning an object of class
("merMod"
). Note that in case a family
argument is present
an object of class "glmerMod"
is always returned.
restricted_models
a list of "g/lmerMod"
(or
"lmerModLmerTest"
) objects from estimating the restricted models
(i.e., each model lacks the corresponding effect)
tests
a list of objects returned by the function for
obtaining the pvalues.
data
The data used for estimation (i.e., after excluding
missing rows and applying expand_re if requested).
call
The matched call.
It also has the attributes "type" and "method", and additionally the
attributes "all_fit_selected" and "all_fit_logLik" if all_fit = TRUE.
Two similar methods exist for objects of class "mixed": print and anova.
They print a nice version of the anova_table element of the returned object
(which is also invisibly returned). These methods omit some columns and
nicely round the other columns. The following columns are always printed:
Effect
name of effect
p.value
estimated pvalue for the effect
For LMMs with method = "KR" or method = "S" the following further columns
are returned (note: the Kenward-Roger correction does two separate things:
(1) it computes an effective number for the denominator df; (2) it scales
the statistic by a calculated amount, see also
https://stackoverflow.com/a/25612960/289572):
F
computed F statistic
ndf
numerator degrees of freedom (number of parameters used
for the effect)
ddf
denominator degrees of freedom (effective residual degrees of freedom for
testing the effect), computed from the Kenward-Roger correction using
pbkrtest::KRmodcomp
F.scaling
scaling of the F-statistic computed from the Kenward-Roger approximation
(only printed if method = "nested-KR")
For models with method="LRT"
the following further columns are
returned:
df.large
degrees of freedom (i.e., estimated parameters) for the full model (i.e.,
the model containing the corresponding effect)
df.small
degrees of freedom (i.e., estimated parameters) for the restricted model
(i.e., the model without the corresponding effect)
chisq
2 times the difference in log-likelihood (obtained with logLik) between
full and restricted model
df
difference in degrees of freedom between full and
restricted model (pvalue is based on these df).
For models with method = "PB" the following further column is returned:
stat
2 times the difference in log-likelihood (obtained with logLik) between
full and restricted model (i.e., a chi-square value).
Note that anova can also be called with additional mixed and/or merMod
objects. In this case the full models are passed on to anova.merMod (with
refit = FALSE, which differs from the default of anova.merMod), which
produces the known LRT tables.
The summary
method for objects of class mixed
simply calls
summary.merMod
on the full model.
If return = "merMod"
(or when invoking lmer_alt
), an object of
class "lmerModLmerTest"
or of class "merMod"
(depending on the
value of afex_options
("lmer_function")
), as returned
from g/lmer
, is returned. The default behavior is to return an object
of class "lmerModLmerTest"
estimated via lmer
.
When method = "KR", obtaining p-values is known to crash due to
insufficient memory or other computational limitations (especially with
complex random effects structures). In these cases, the other methods
should be used. The RAM demand is a problem especially on 32-bit Windows,
which only supports up to 2 or 3 GB RAM (see R Windows FAQ). Then it is
probably a good idea to use methods "S", "LRT", or "PB".
"mixed" will throw a message if numerical variables are not centered on 0,
as main effects (of variables other than the numeric one) can be hard to
interpret if numerical variables appear in interactions. See Dalal &
Zickar (2012).
Per default mixed uses the lmer function from lmerTest; this can be changed
to the lmer function from lme4 by calling:
afex_options(lmer_function = "lme4")
Formulas longer than 500 characters will most likely fail due to the use of
deparse
.
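The 500-character limitation stems from deparse splitting long expressions into multiple strings; a quick base-R check (variable names x1..x60 are purely illustrative):

```r
# deparse() breaks long expressions (default width.cutoff = 60) into
# a character vector with several elements rather than one string:
f <- as.formula(paste("y ~", paste(paste0("x", 1:60), collapse = " + ")))
length(deparse(f))  # > 1 for long formulas
```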
Please report bugs or unexpected behavior by opening a GitHub issue: https://github.com/singmann/afex/issues
Henrik Singmann with contributions from Ben Bolker and Joshua Wiley.
Baayen, R. H. (2008). Analyzing linguistic data: A practical introduction to statistics using R. Cambridge, UK; New York: Cambridge University Press.
Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390-412. doi: 10.1016/j.jml.2007.12.005
Baayen, R. H., & Milin, P. (2010). Analyzing reaction times. International Journal of Psychological Research, 3(2), 12-28.
Barr, D. J. (2013). Random effects structure for testing interactions in linear mixed-effects models. Frontiers in Quantitative Psychology and Measurement, 328. doi: 10.3389/fpsyg.2013.00328
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255-278. doi: 10.1016/j.jml.2012.11.001
Bates, D., Kliegl, R., Vasishth, S., & Baayen, H. (2015). Parsimonious mixed models. arXiv:1506.04967 [stat]. Retrieved from https://arxiv.org/abs/1506.04967
Dalal, D. K., & Zickar, M. J. (2012). Some common myths about centering predictor variables in moderated multiple regression and polynomial regression. Organizational Research Methods, 15(3), 339-362. doi: 10.1177/1094428111430540
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54-69. doi: 10.1037/a0028347
Luke, S. (2017). Evaluating significance in linear mixed-effects models in R. Behavior Research Methods. doi: 10.3758/s13428-016-0809-y
Maxwell, S. E., & Delaney, H. D. (2004). Designing experiments and analyzing data: A model comparison perspective. Mahwah, NJ: Lawrence Erlbaum Associates.
aov_ez
and aov_car
for convenience
functions to analyze experimental designs with classical ANOVA or ANCOVA
wrapping Anova
.
see the following for the data sets from Maxwell and Delaney (2004) used
and more examples: md_15.1
, md_16.1
, and
md_16.4
.
##################################
## Simple Examples (from MEMSS) ##
##################################
if (requireNamespace("MEMSS")) {
data("Machines", package = "MEMSS")
# simple model with random slopes for repeated-measures factor
m1 <- mixed(score ~ Machine + (Machine|Worker), data=Machines)
m1
# suppress correlations among random effect parameters with || and expand_re = TRUE
m2 <- mixed(score ~ Machine + (Machine||Worker), data=Machines, expand_re = TRUE)
m2
## compare:
summary(m1)$varcor
summary(m2)$varcor
# for wrong solution see:
# summary(lmer(score ~ Machine + (Machine||Worker), data=Machines))$varcor
if (requireNamespace("emmeans")) {
# followup tests
library("emmeans") # package emmeans needs to be attached for followup tests.
(emm1 <- emmeans(m1, "Machine"))
pairs(emm1, adjust = "holm") # all pairwise comparisons
con1 <- list(
c1 = c(1, -0.5, -0.5), # 1 versus other 2
c2 = c(0.5, -1, 0.5) # 1 and 3 versus 2
)
contrast(emm1, con1, adjust = "holm")
if (requireNamespace("ggplot2")) {
# plotting
afex_plot(m1, "Machine") ## default uses modelbased CIs
## within-subjects CIs somewhat more in line with pairwise comparisons:
afex_plot(m1, "Machine", error = "within")
## less differences between CIs for model without correlations:
afex_plot(m2, "Machine")
afex_plot(m2, "Machine", error = "within")
}}}
## Not run:
#######################
### Further Options ###
#######################
## Multicore:
require(parallel)
(nc <- detectCores()) # number of cores
cl <- makeCluster(rep("localhost", nc)) # make cluster
# to keep track of what the function is doing redirect output to outfile:
# cl <- makeCluster(rep("localhost", nc), outfile = "cl.log.txt")
data("Machines", package = "MEMSS")
## There are two ways to use multicore:
# 1. Obtain fits with multicore (e.g. for likelihood ratio tests, LRT):
mixed(score ~ Machine + (Machine|Worker), data=Machines, cl = cl,
method = "LRT")
# 2. Obtain PB samples via multicore:
mixed(score ~ Machine + (Machine|Worker), data=Machines,
method = "PB", args_test = list(nsim = 50, cl = cl)) # better use 500 or 1000
## Both ways can be combined:
# 2. Obtain PB samples via multicore:
mixed(score ~ Machine + (Machine|Worker), data=Machines, cl = cl,
method = "PB", args_test = list(nsim = 50, cl = cl))
#### use all_fit = TRUE and expand_re = TRUE:
data("sk2011.2") # data described in more detail below
sk2_aff <- droplevels(sk2011.2[sk2011.2$what == "affirmation",])
require(optimx) # uses two more algorithms
sk2_aff_b <- mixed(response ~ instruction*type+(inference*type||id), sk2_aff,
expand_re = TRUE, all_fit = TRUE, method = "LRT")
attr(sk2_aff_b, "all_fit_selected")
attr(sk2_aff_b, "all_fit_logLik")
# considerably faster with multicore:
clusterEvalQ(cl, library(optimx)) # need to load optimx in cluster
sk2_aff_b2 <- mixed(response ~ instruction*type+(inference*type||id), sk2_aff,
expand_re = TRUE, all_fit = TRUE, cl=cl, method = "LRT")
attr(sk2_aff_b2, "all_fit_selected")
attr(sk2_aff_b2, "all_fit_logLik")
stopCluster(cl)
## End(Not run)
###################################################
## Replicating Maxwell & Delaney (2004) Examples ##
###################################################
## Not run:
### replicate results from Table 15.4 (Maxwell & Delaney, 2004, p. 789)
data(md_15.1)
# random intercept plus random slope
(t15.4a <- mixed(iq ~ timecat + (1+time|id), data=md_15.1))
# to also replicate exact parameters use treatment.contrasts and the last level as base level:
contrasts(md_15.1$timecat) <- contr.treatment(4, base = 4)
(t15.4b <- mixed(iq ~ timecat + (1+time|id), data=md_15.1, check_contrasts=FALSE))
summary(t15.4a) # gives "wrong" parameter estimates
summary(t15.4b) # identical parameter estimates
# for more examples from chapter 15 see ?md_15.1
### replicate results from Table 16.3 (Maxwell & Delaney, 2004, p. 837)
data(md_16.1)
# original results need treatment contrasts:
(mixed1_orig <- mixed(severity ~ sex + (1|id), md_16.1, check_contrasts=FALSE))
summary(mixed1_orig$full_model)
# pvalue stays the same with afex default contrasts (contr.sum),
# but estimates and tvalues for the fixed effects parameters change.
(mixed1 <- mixed(severity ~ sex + (1|id), md_16.1))
summary(mixed1$full_model)
# data for next examples (Maxwell & Delaney, Table 16.4)
data(md_16.4)
str(md_16.4)
### replicate results from Table 16.6 (Maxwell & Delaney, 2004, p. 845)
# Note that (1|room:cond) is needed because room is nested within cond.
# p-value (almost) holds.
(mixed2 <- mixed(induct ~ cond + (1|room:cond), md_16.4))
# (differences are due to the use of the Kenward-Roger approximation here,
# whereas M&D's p-values are based on uncorrected df.)
# again, to obtain identical parameter and tvalues, use treatment contrasts:
summary(mixed2) # not identical
# prepare new data.frame with contrasts:
md_16.4b <- within(md_16.4, cond <- C(cond, contr.treatment, base = 2))
str(md_16.4b)
# p-value stays identical:
(mixed2_orig <- mixed(induct ~ cond + (1|room:cond), md_16.4b,
check_contrasts=FALSE))
summary(mixed2_orig$full_model) # replicates parameters
### replicate results from Table 16.7 (Maxwell & Delaney, 2004, p. 851)
# F-values (almost) hold, p-values (especially for skill) are off
(mixed3 <- mixed(induct ~ cond + skill + (1|room:cond), md_16.4))
# however, parameters are perfectly recovered when using the original contrasts:
mixed3_orig <- mixed(induct ~ cond + skill + (1|room:cond), md_16.4b,
check_contrasts=FALSE)
summary(mixed3_orig)
### replicate results from Table 16.10 (Maxwell & Delaney, 2004, p. 862)
# for this we need to center cog:
md_16.4b$cog <- scale(md_16.4b$cog, scale=FALSE)
# F-values and p-values are relatively off:
(mixed4 <- mixed(induct ~ cond*cog + (cog|room:cond), md_16.4b))
# contrast has a relatively important influence on cog
(mixed4_orig <- mixed(induct ~ cond*cog + (cog|room:cond), md_16.4b,
check_contrasts=FALSE))
# parameters are again almost perfectly recovered:
summary(mixed4_orig)
## End(Not run)
###########################
## Full Analysis Example ##
###########################
## Not run:
### split-plot experiment (Singmann & Klauer, 2011, Exp. 2)
## between factor: instruction
## within factors: inference & type
## hypothesis: three-way interaction
data("sk2011.2")
# use only affirmation problems (S&K also split the data like this)
sk2_aff <- droplevels(sk2011.2[sk2011.2$what == "affirmation",])
# set up model with maximal byparticipant random slopes
sk_m1 <- mixed(response ~ instruction*inference*type+(inference*type|id), sk2_aff)
sk_m1 # prints ANOVA table with nicely rounded numbers (i.e., as characters)
nice(sk_m1) # returns the same but without printing potential warnings
anova(sk_m1) # returns and prints numeric ANOVA table (i.e., not rounded)
summary(sk_m1) # lmer summary of full model
# same model but using Kenward-Roger approximation of df
# very similar results but slower
sk_m1b <- mixed(response ~ instruction*inference*type+(inference*type|id),
sk2_aff, method="KR")
nice(sk_m1b)
# identical results as:
anova(sk_m1$full_model)
# suppressing correlation among random slopes: very similar results, but
# significantly faster and often less convergence warnings.
sk_m2 <- mixed(response ~ instruction*inference*type+(inference*type||id), sk2_aff,
expand_re = TRUE)
sk_m2
## mixed objects can be passed to emmeans
library("emmeans") # however, package emmeans needs to be attached first
# emmeans also approximates df which takes time with default Kenward-Roger
emm_options(lmer.df = "Kenward-Roger") # default setting, slow
emm_options(lmer.df = "Satterthwaite") # faster setting, preferable
emm_options(lmer.df = "asymptotic") # the fastest, df = infinity
# recreates basically Figure 4 (S&K, 2011, upper panel)
# only the 4th and 6th x-axis position are flipped
afex_plot(sk_m1, x = c("type", "inference"), trace = "instruction")
# set up reference grid for custom contrasts:
(rg1 <- emmeans(sk_m1, c("instruction", "type", "inference")))
# set up contrasts on reference grid:
contr_sk2 <- list(
ded_validity_effect = c(rep(0, 4), 1, rep(0, 5), -1, 0),
ind_validity_effect = c(rep(0, 5), 1, rep(0, 5), -1),
counter_MP = c(rep(0, 4), 1, -1, rep(0, 6)),
counter_AC = c(rep(0, 10), 1, -1)
)
# test the main double dissociation (see S&K, p. 268)
contrast(rg1, contr_sk2, adjust = "holm")
# all effects are significant.
## End(Not run)
####################
## Other Examples ##
####################
## Not run:
# use the obk.long data (not reasonable, no random slopes)
data(obk.long)
mixed(value ~ treatment * phase + (1|id), obk.long)
# Examples for using the per_parameter argument
# note, requires method = "nested-KR", "LRT", or "PB"
# also we use custom contrasts
data(obk.long, package = "afex")
obk.long$hour < ordered(obk.long$hour)
contrasts(obk.long$phase) <- "contr.sum"
contrasts(obk.long$treatment) <- "contr.sum"
# tests only the main effect parameters of hour individually per parameter.
mixed(value ~ treatment*phase*hour + (1|id), per_parameter = "^hour$",
data = obk.long, method = "nested-KR", check_contrasts = FALSE)
# tests all parameters including hour individually
mixed(value ~ treatment*phase*hour + (1|id), per_parameter = "hour",
data = obk.long, method = "nested-KR", check_contrasts = FALSE)
# tests all parameters individually
mixed(value ~ treatment*phase*hour + (1|id), per_parameter = ".",
data = obk.long, method = "nested-KR", check_contrasts = FALSE)
# example data from package languageR: Lexical decision latencies elicited from
# 21 subjects for 79 English concrete nouns, with variables linked to subject or
# word.
data(lexdec, package = "languageR")
# using the simplest model
m1 <- mixed(RT ~ Correct + Trial + PrevType * meanWeight +
Frequency + NativeLanguage * Length + (1|Subject) + (1|Word), data = lexdec)
m1
# Mixed Model Anova Table (Type 3 tests, S-method)
#
# Model: RT ~ Correct + Trial + PrevType * meanWeight + Frequency + NativeLanguage *
# Model: Length + (1 | Subject) + (1 | Word)
# Data: lexdec
# Effect df F p.value
# 1 Correct 1, 1627.67 8.16 ** .004
# 2 Trial 1, 1591.92 7.58 ** .006
# 3 PrevType 1, 1605.05 0.17 .680
# 4 meanWeight 1, 74.37 14.85 *** <.001
# 5 Frequency 1, 75.06 56.54 *** <.001
# 6 NativeLanguage 1, 27.12 0.70 .412
# 7 Length 1, 74.80 8.70 ** .004
# 8 PrevType:meanWeight 1, 1600.79 6.19 * .013
# 9 NativeLanguage:Length 1, 1554.49 14.24 *** <.001
# 
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘+’ 0.1 ‘ ’ 1
# Fitting a GLMM using parametric bootstrap:
require("mlmRev") # for the data, see ?Contraception
gm1 <- mixed(use ~ age + I(age^2) + urban + livch + (1 | district), method = "PB",
family = binomial, data = Contraception, args_test = list(nsim = 10))
## note that nsim = 10 is way too low for all real examples!
## End(Not run)
## Not run:
#####################################
## Interplay with effects packages ##
#####################################
data("Machines", package = "MEMSS")
# simple model with random slopes for repeated-measures factor
m1 <- mixed(score ~ Machine + (Machine|Worker), data=Machines,
set_data_arg = TRUE) ## necessary for it to work!
library("effects")
Effect("Machine", m1$full_model) # not correct:
# Machine effect
# Machine
# A B C
# 59.65000 52.35556 60.32222
# compare:
emmeans::emmeans(m1, "Machine")
# Machine emmean SE df asymp.LCL asymp.UCL
# A 52.35556 1.680711 Inf 49.06142 55.64969
# B 60.32222 3.528546 Inf 53.40640 67.23804
# C 66.27222 1.806273 Inf 62.73199 69.81245
## necessary to set contr.sum globally:
set_sum_contrasts()
Effect("Machine", m1$full_model)
# Machine effect
# Machine
# A B C
# 52.35556 60.32222 66.27222
plot(Effect("Machine", m1$full_model))
## End(Not run)
