evaluate_game: Compare two models using a specified model selection criterion

View source: R/tournament.R

evaluate_game  R Documentation

Compare two models using a specified model selection criterion

Description

evaluate_game uses WAIC, DIC or the posterior probabilities of the models, computed with Bayes factors, to determine whether one model is more appropriate than the other given the data at hand. See the Examples section below for a usage sketch.

Usage

evaluate_game(m, method, winning_criteria)

Arguments

m

a list of two model objects fitted to the same dataset. The allowed model objects are "gplm", "gplm0", "plm" and "plm0".

method

a string specifying the method used to estimate the predictive performance of the models. The allowed methods are "WAIC", "DIC" and "Posterior_probability".

winning_criteria

a numeric value setting the threshold that the first model in the list must exceed for it to be declared the more appropriate model. This value defaults to 2.2 for methods "WAIC" and "DIC", and to 0.75 for method "Posterior_probability". A sketch of this rule follows below.
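
To make the threshold concrete, here is a minimal sketch of the decision rule as described above; the variable names (waic_m1, waic_m2, post_prob_m1) are hypothetical placeholders, and the exact internals of evaluate_game may differ:

# Hypothetical WAIC values for the two models in the list
waic_m1 <- 150.3
waic_m2 <- 153.1
# With method = "WAIC" (or "DIC"), the first model is declared more
# appropriate only if it improves on the second model's score by more
# than winning_criteria (default 2.2)
winner <- if (waic_m2 - waic_m1 > 2.2) "model 1" else "model 2"

# Hypothetical posterior probability of the first model
post_prob_m1 <- 0.82
# With method = "Posterior_probability", the first model must exceed 0.75
winner <- if (post_prob_m1 > 0.75) "model 1" else "model 2"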

Value

A data.frame containing a summary of the results of the game.

References

Hrafnkelsson, B., Sigurdarson, H., and Gardarsson, S. M. (2022). Generalization of the power-law rating curve using hydrodynamic theory and Bayesian hierarchical modeling, Environmetrics, 33(2):e2711.

See Also

tournament
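
Examples

A minimal usage sketch, assuming the krokfors dataset shipped with bdrc and the plm and plm0 fitting functions; evaluate_game is normally called internally by tournament, so a direct call like this is for illustration only:

library(bdrc)
data(krokfors)

# Fit two rating curve models to the same stage-discharge data
# (W is stage, Q is discharge; sampling may take a while)
plm.fit <- plm(Q ~ W, data = krokfors)
plm0.fit <- plm0(Q ~ W, data = krokfors)

# Compare the two models with WAIC; the first model in the list must beat
# the second by more than winning_criteria to win the game
evaluate_game(list(plm.fit, plm0.fit), method = "WAIC", winning_criteria = 2.2)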
