Compare fitted models on LOO or WAIC
At least two objects returned by loo or waic.
When comparing two fitted models, we can estimate the difference in
their expected predictive accuracy by the difference in
elpd_loo (multiplied by -2, if desired, to be on the deviance
scale). To compute the standard error of this difference we can use a
paired estimate to take advantage of the fact that the same set of N
data points was used to fit both models. We think these calculations will
be most useful when N is large, because then non-normality of the
distribution is not such an issue when estimating the uncertainty in these
sums. These standard errors, for all their flaws, should give a better
sense of uncertainty than what is obtained using the current standard
approach of comparing differences of deviances to a Chi-squared
distribution, a practice derived for Gaussian linear models or
asymptotically, and which only applies to nested models in any case.
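The paired calculation described above can be sketched numerically. The following is an illustration of the arithmetic only (in Python, with made-up pointwise elpd values), not the loo implementation:

```python
import numpy as np

# Hypothetical pointwise elpd contributions for two models fit to the
# same N data points (in loo these are the pointwise elpd_loo values).
rng = np.random.default_rng(1)
N = 500
elpd_1 = rng.normal(-1.2, 0.6, N)           # model 1, one value per data point
elpd_2 = elpd_1 + rng.normal(0.05, 0.3, N)  # model 2, same data points

# Paired estimate: work with per-observation differences, exploiting the
# fact that both models were fit to the same N observations.
diff = elpd_2 - elpd_1
elpd_diff = diff.sum()                   # difference in expected predictive accuracy
se_diff = np.sqrt(N) * diff.std(ddof=1)  # paired standard error of the sum

deviance_scale = -2 * elpd_diff          # multiply by -2 for the deviance scale
```

Because the per-point differences are correlated with neither model's total, the paired standard error is typically much smaller than what one would get by treating the two elpd sums as independent.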
A vector or matrix with class "compare.loo". If ... contains more than two objects then a matrix of summary information is returned. If ... contains exactly two objects then the difference in expected predictive accuracy and the standard error of the difference are returned (see Details). The difference will be positive if the expected predictive accuracy for the second model is higher.
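A minimal sketch of the sign convention, using hypothetical total elpd_loo values rather than output from loo:

```python
# Hypothetical total elpd_loo for two fitted models (not real loo output).
elpd_model1 = -250.0
elpd_model2 = -245.0

# The difference is second minus first, so a positive value means the
# second model has the higher expected predictive accuracy.
elpd_diff = elpd_model2 - elpd_model1  # positive: second model favored
```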
In previous versions of loo, model weights were also reported by compare. We have removed the weights because they were based only on the point estimates of the elpd values, ignoring the uncertainty. We are currently working on something similar to these weights that also accounts for uncertainty, which will be included in future versions of loo.
Vehtari, A., Gelman, A., and Gabry, J. (2016). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. http://arxiv.org/abs/1507.04544/ (preprint)