View source: R/mermboost_functions.R
mermboost | R Documentation

Description:

Gradient boosting for optimizing negative log-likelihoods as loss functions, where arbitrary component-wise base-learners, e.g., smoothing procedures, are utilized as additive model components. In addition, each iteration estimates the random components via a maximum likelihood approach based on the current fit.
Usage:

mermboost(formula, data = list(), na.action = na.omit, weights = NULL,
          offset = NULL, family = gaussian, control = boost_control(),
          oobweights = NULL,
          baselearner = c("bbs", "bols", "btree", "bss", "bns"), ...)
Arguments:

formula: a symbolic description of the model to be fit in the lme4 format, including random effects.

data: a data frame containing the variables in the model.

na.action: a function which indicates what should happen when the data contain NAs.

weights: (optional) a numeric vector of weights to be used in the fitting process.

offset: a numeric vector to be used as offset (optional).

family: !! This is in contrast to usual mboost: "only" a family object as used by glm (e.g., gaussian, binomial, poisson) can be supplied, since the random components are estimated via maximum likelihood; mboost Family objects are not used here.

control: a list of parameters controlling the algorithm. For more details see boost_control.

oobweights: an additional vector of out-of-bag weights, which is used for the out-of-bag risk (i.e., if boost_control(risk = "oobag")).

baselearner: a character specifying the component-wise base-learner to be used: "bbs" means P-splines with a B-spline basis, "bols" linear models and "btree" boosted stumps.

...: additional arguments passed to the underlying mboost fitting functions.
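As a minimal sketch of the family argument, consider a binary response; the variable large below is constructed purely for illustration and is not part of the Orthodont data:

data("Orthodont")
# illustrative binary outcome, not part of the original data
Orthodont$large <- as.integer(Orthodont$distance > 25)
# a plain glm-style family object is passed, not an mboost Family object
mod_bin <- mermboost(large ~ bbs(age) + bols(Sex) + (1 | Subject),
                     data = Orthodont, family = binomial,
                     control = boost_control(mstop = 50))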
Details:

A (generalized) additive mixed model is fitted using a boosting algorithm based on component-wise base-learners. Additionally, a mixed model is estimated in every iteration and added to the current fit.
The base-learners can either be specified via the formula object or via the baselearner argument. The latter is the default base-learner, which is used for all variables in the formula without an explicit base-learner specification (i.e., if base-learners are explicitly specified in formula, the baselearner argument is ignored for those variables); a sketch contrasting the two is given below. Of note, "bss" and "bns" are deprecated and are kept in the list only for backward compatibility. Note that further base-learners (i.e., in addition to the one provided via baselearner) can be specified in formula. See baselearners for details.
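For illustration, a brief sketch contrasting the two ways of specifying base-learners, using the Orthodont data from the Examples (that the default applies exactly as in mboost is an assumption based on the shared interface):

# explicit base-learners in the formula; the baselearner argument is
# ignored for age and Sex
m_explicit <- mermboost(distance ~ bbs(age) + bols(Sex) + (1 | Subject),
                        data = Orthodont, family = gaussian)
# no explicit base-learner for age: the default "bbs" is used
m_default <- mermboost(distance ~ age + (1 | Subject),
                       data = Orthodont, family = gaussian,
                       baselearner = "bbs")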
The description of mboost holds, while some methods are newly implemented, such as predict.mermboost, plot.mer_cv and mstop.mer_cv; only predict.mermboost requires a further argument. Additionally, the methods VarCorr.mermboost and ranef.mermboost are implemented specifically. A usage sketch follows below.
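A hedged sketch of these methods on a fitted model mod as created in the Examples below (the name and meaning of the additional argument of predict.mermboost are not shown here; see its help page):

cv <- mer_cvrisk(mod, no_of_folds = 5)  # cluster-sensitive cross-validation
plot(cv)                                # plot.mer_cv
mstop(cv)                               # mstop.mer_cv
VarCorr(mod)                            # estimated random-effects covariances
ranef(mod)                              # predicted random effects
preds <- predict(mod)                   # predict.mermboost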
See Also:

See glmermboost for the same approach using linear models instead of additive ones.
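A short sketch of the analogous call, assuming glmermboost mirrors the mermboost interface as its role as the linear-model counterpart suggests:

# every covariate enters through a linear base-learner
mod_lin <- glmermboost(distance ~ age + Sex + (1 | Subject),
                       data = Orthodont, family = gaussian,
                       control = boost_control(mstop = 100))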
See mer_cvrisk for cluster-sensitive cross-validation.
data("Orthodont")
# are there cluster-constant covariates?
find_ccc(Orthodont, "Subject")
mod <- mermboost(distance ~ bbs(age, knots = 4) + bols(Sex) + (1 | Subject),
data = Orthodont, family = gaussian,
control = boost_control(mstop = 100))
# let mermboost do the cluster-sensitive cross-validation for you
norm_cv <- mer_cvrisk(mod, no_of_folds = 10)
opt_m <- mstop(norm_cv)
# fit model with optimal stopping iteration
mod_opt <- mermboost(distance ~ bbs(age, knots = 4) + bols(Sex) + (1 | Subject),
data = Orthodont, family = gaussian,
control = boost_control(mstop = opt_m))
# use the model as known from mboost
# in addition, there are some methods known from lme4
ranef(mod_opt)
VarCorr(mod_opt)