compare: Model comparison


Description

Compare fitted models on LOO or WAIC

Usage

compare(..., x = list())

Arguments

...

At least two objects returned by loo or waic.

x

A list of at least two objects returned by loo or waic. This argument can be used as an alternative to specifying the models in ....
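
For example, a minimal sketch of the two equivalent calling styles (the object names loo1, loo2, and loo3 are hypothetical placeholders for objects returned by loo or waic):

compare(loo1, loo2, loo3)
compare(x = list(loo1, loo2, loo3))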

Details

When comparing two fitted models, we can estimate the difference in their expected predictive accuracy by the difference in elpd_waic or elpd_loo (multiplied by -2, if desired, to be on the deviance scale). To compute the standard error of this difference we can use a paired estimate to take advantage of the fact that the same set of N data points was used to fit both models. We think these calculations will be most useful when N is large, because then non-normality of the distribution is not such an issue when estimating the uncertainty in these sums. These standard errors, for all their flaws, should give a better sense of uncertainty than what is obtained using the current standard approach of comparing differences of deviances to a Chi-squared distribution, a practice derived for Gaussian linear models or asymptotically, and which only applies to nested models in any case.
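
As a rough illustration of the paired calculation described above, the following R sketch computes the difference and its standard error from the pointwise elpd contributions of two hypothetical loo objects (it assumes each object stores an N-row pointwise matrix with an elpd_loo column; the exact structure may differ across loo versions):

diff_i <- loo2$pointwise[, "elpd_loo"] - loo1$pointwise[, "elpd_loo"]
elpd_diff <- sum(diff_i)                      # positive favours the second model
se_diff <- sqrt(length(diff_i)) * sd(diff_i)  # paired standard error of the sum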

Value

A vector or matrix with class 'compare.loo' that has its own print method. If exactly two objects are provided in ... or x, then the difference in expected predictive accuracy and the standard error of the difference are returned (see Details). The difference will be positive if the expected predictive accuracy for the second model is higher. If more than two objects are provided then a matrix of summary information is returned.
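
For instance, with exactly two models the returned vector can be inspected by name; this sketch assumes the elements are labelled elpd_diff and se, and that loo1 and loo2 are objects as in the Examples below:

cmp <- compare(loo1, loo2)
cmp["elpd_diff"]  # > 0 means the second model (loo2) has higher expected predictive accuracy
cmp["se"]         # standard error of that difference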

Note

In previous versions of loo model weights were also reported by compare. We have removed the weights because they were based only on the point estimate of the elpd values ignoring the uncertainty. We are currently working on something similar to these weights that also accounts for uncertainty, which will be included in future versions of loo.

References

Vehtari, A., Gelman, A., and Gabry, J. (2016a). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing. Advance online publication. doi:10.1007/s11222-016-9696-4. arXiv preprint: http://arxiv.org/abs/1507.04544

Vehtari, A., Gelman, A., and Gabry, J. (2016b). Pareto smoothed importance sampling. arXiv preprint: http://arxiv.org/abs/1507.02646

Examples

## Not run: 
# Compare two models on LOO (log_lik1 and log_lik2 are pointwise
# log-likelihood matrices from the two fitted models)
loo1 <- loo(log_lik1)
loo2 <- loo(log_lik2)
print(compare(loo1, loo2), digits = 3)

# The same comparison using WAIC
waic1 <- waic(log_lik1)
waic2 <- waic(log_lik2)
compare(waic1, waic2)

## End(Not run)

