View source: R/compare_performance.R
compare_performance (R Documentation)
compare_performance()
computes indices of model
performance for different models at once and hence allows comparison of
indices across models.
compare_performance(
  ...,
  metrics = "all",
  rank = FALSE,
  estimator = "ML",
  verbose = TRUE
)
...         Multiple model objects (also of different classes).

metrics     Can be "all", "common", or a character vector of metric names to
            be computed (e.g., "AIC", "AICc", "BIC", "WAIC", "LOOIC"; see the
            example below).

rank        Logical, if TRUE, models are ranked according to an overall
            Performance_Score (see the details below).

estimator   Only for linear models. Corresponds to the different estimators
            for the standard deviation of the errors. If estimator = "ML"
            (the default), information criteria for model classes such as
            lme4 models are based on the ML-estimator; see the details below.

verbose     Toggle warnings.
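A brief, hypothetical sketch of restricting the computed indices via metrics; it assumes that, as the details below indicate, "common" and explicit metric names such as "AIC" and "BIC" are accepted values:

lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)

# a reduced, "common" set of indices instead of all available ones
compare_performance(lm1, lm2, metrics = "common")

# only the named information criteria
compare_performance(lm1, lm2, metrics = c("AIC", "BIC"))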
When information criteria (IC) are requested in metrics (i.e., any of "all",
"common", "AIC", "AICc", "BIC", "WAIC", or "LOOIC"), model weights based on
these criteria are also computed. For all IC except LOOIC, weights are
computed as w = exp(-0.5 * delta_ic) / sum(exp(-0.5 * delta_ic)), where
delta_ic is the difference between the model's IC value and the smallest IC
value in the model set (Burnham and Anderson, 2002). For LOOIC, weights are
computed as "stacking weights" using loo::stacking_weights().
When rank = TRUE, a new column Performance_Score is returned. This score
ranges from 0% to 100%, with higher values indicating better model
performance. Note that all score values do not necessarily sum up to 100%.
Rather, the calculation is based on normalizing all indices (i.e., rescaling
them to a range from 0 to 1) and taking the mean value of all indices for
each model. This is a rather quick heuristic, but it might be helpful as an
exploratory index.
In particular when models are of different types (e.g., mixed models,
classical linear models, logistic regression, ...), not all indices will be
computed for each model. In cases where an index cannot be calculated for a
specific model type, this model gets an NA value. All indices that have any
NAs are excluded from calculating the performance score.
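The following is a rough, illustrative sketch of this heuristic, not the package's internal implementation; in particular, how indices where smaller values are better (such as RMSE) are handled internally may differ from the simple sign-flip used here:

normalize <- function(x) (x - min(x)) / (max(x) - min(x))

# made-up indices for three models; RMSE is reversed so that larger = better
indices <- data.frame(R2 = c(0.61, 0.84, 0.86), neg_RMSE = -c(0.51, 0.33, 0.32))
performance_score <- rowMeans(sapply(indices, normalize)) * 100
round(performance_score, 1)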
There is a plot()-method for compare_performance(), which creates a
"spiderweb" plot, where the different indices are normalized and larger
values indicate better model performance. Hence, points closer to the center
indicate worse fit indices (see online documentation for more details).
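A minimal sketch, assuming the see package is installed (which provides the plot() method); the exact appearance and requirements of the plot may depend on the package version:

library(see)
lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)
plot(compare_performance(lm1, lm2, lm3))  # spiderweb plot of normalized indices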
By default, estimator = "ML", which means that values from information
criteria (AIC, AICc, BIC) for specific model classes (like models from lme4)
are based on the ML-estimator, while the default behaviour of AIC() for such
classes is setting REML = TRUE. This default is intentional, because
comparing information criteria based on REML fits is usually not valid (it
might be useful, though, if all models share the same fixed effects; however,
this is usually not the case for nested models, which is a prerequisite for
the LRT). Set estimator = "REML" explicitly to return the same (AIC/...)
values as from the defaults in AIC.merMod().
A data frame with one row per model and one column per "index" (see metrics).
There is also a plot()-method implemented in the see-package.
Burnham, K. P., and Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach (2nd ed.). Springer-Verlag. doi:10.1007/b97636
data(iris)
lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)

# compare indices across models, then rank models by overall performance
compare_performance(lm1, lm2, lm3)
compare_performance(lm1, lm2, lm3, rank = TRUE)

# models of different classes (linear, logistic, mixed model)
m1 <- lm(mpg ~ wt + cyl, data = mtcars)
m2 <- glm(vs ~ wt + mpg, data = mtcars, family = "binomial")
m3 <- lme4::lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris)
compare_performance(m1, m2, m3)