View source: R/compare_models.R
Description

compare_models uses the 'loo' package to compute LOO (leave-one-out cross-validation) or WAIC (widely applicable information criterion) for two or more fit MixSIAR models.
Usage

compare_models(x, loo = TRUE)

Arguments

x: list of two or more rjags model objects (output from run_model)

loo: TRUE/FALSE: if TRUE (default), compute LOO; if FALSE, compute WAIC
Details

LOO and WAIC are "methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values". See Vehtari, Gelman, & Gabry (2017). In brief:

- LOO and WAIC are preferred over AIC or DIC
- LOO is more robust than WAIC
- the 'loo' package estimates standard errors for the difference in LOO/WAIC between two models
- we can calculate the relative support for each model using LOO/WAIC weights
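The pointwise idea above can be sketched by hand: given a matrix of log-likelihood values (posterior draws x observations), WAIC is the log pointwise predictive density penalized by the pointwise posterior variance of the log-likelihood. A minimal base-R illustration, using a simulated log-likelihood matrix (hypothetical, not MixSIAR output):

```r
set.seed(1)
# Hypothetical log-likelihood matrix: 1000 posterior draws x 25 observations
ll <- matrix(rnorm(1000 * 25, mean = -1, sd = 0.1), nrow = 1000)

# lppd: log pointwise predictive density, averaging the likelihood over draws
lppd <- sum(log(colMeans(exp(ll))))

# p_waic: effective number of parameters, the pointwise variance of the log-likelihood
p_waic <- sum(apply(ll, 2, var))

# WAIC on the deviance scale (lower = better expected predictive accuracy)
waic <- -2 * (lppd - p_waic)
```

In practice compare_models delegates these calculations to the 'loo' package rather than computing them by hand.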
Value

Data frame with the following columns:

Model: names of x (input list)

LOOic / WAIC: LOO information criterion or WAIC

se_LOOic / se_WAIC: standard error of LOOic / WAIC

dLOOic / dWAIC: difference between each model and the model with lowest LOOic/WAIC. Best model has dLOOic = 0.

se_dLOOic / se_dWAIC: standard error of the difference between each model and the model with lowest LOOic/WAIC

weight: relative support for each model, calculated as Akaike weights (p. 75, Burnham & Anderson 2002). Interpretation: "an estimate of the probability that the model will make the best predictions on new data, conditional on the set of models considered" (McElreath 2016).
References

Vehtari, A., A. Gelman, and J. Gabry. 2017. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing.

Burnham, K. P. and D. R. Anderson. 2002. Model selection and multimodel inference: a practical information-theoretic approach. Springer Science & Business Media. Pages 75-88.

McElreath, R. 2016. Statistical rethinking: a Bayesian course with examples in R and Stan. CRC Press. Pages 188-201.
Examples

## Not run:
# Model 1 = wolf diet by Region + Pack
mix.1 <- load_mix_data(filename=mix.filename,
iso_names=c("d13C","d15N"),
factors=c("Region","Pack"),
fac_random=c(TRUE,TRUE),
fac_nested=c(FALSE,TRUE),
cont_effects=NULL)
source.1 <- load_source_data(filename=source.filename, source_factors="Region",
conc_dep=FALSE, data_type="means", mix.1)
discr.1 <- load_discr_data(filename=discr.filename, mix.1)
# Run Model 1
jags.1 <- run_model(run="test", mix.1, source.1, discr.1, model_filename,
                    alpha.prior = 1, resid_err=TRUE, process_err=TRUE)
# Model 2 = wolf diet by Region (no Pack)
mix.2 <- load_mix_data(filename=mix.filename,
iso_names=c("d13C","d15N"),
factors=c("Region"),
fac_random=c(TRUE),
fac_nested=c(FALSE),
cont_effects=NULL)
source.2 <- load_source_data(filename=source.filename, source_factors="Region",
conc_dep=FALSE, data_type="means", mix.2)
discr.2 <- load_discr_data(filename=discr.filename, mix.2)
# Run Model 2
jags.2 <- run_model(run="test", mix.2, source.2, discr.2, model_filename,
                    alpha.prior = 1, resid_err=TRUE, process_err=TRUE)
# Compare models 1 and 2 using LOO
compare_models(x=list(jags.1, jags.2), loo=TRUE)
# Compare models 1 and 2 using WAIC
compare_models(x=list(jags.1, jags.2), loo=FALSE)
# Get WAIC for model 1
compare_models(x=list(jags.1), loo=FALSE)
# Create named list of rjags objects to get model names in summary
x <- list(jags.1, jags.2)
names(x) <- c("Region + Pack", "Region")
compare_models(x)
## End(Not run)