View source: R/model.comparison.R
compare.model.xvals | R Documentation
compare.model.xvals
Compares the outputs of different BEDASSLE cross-validation analyses
compare.model.xvals(xval.files, mod.order, mod.names, mod.cols = NULL)
xval.files | A character vector of the names of the files containing the results of the different cross-validation analyses to be compared. |
mod.order | An integer vector giving the order of the models being compared, from least to most complex. |
mod.names | A character vector giving the names of the models being compared, used for labeling in the output figure. |
mod.cols | A vector containing the colors to be used in plotting the output of the different cross-validation analyses. Default is black. |
This function compares outputs of different BEDASSLE cross-validation analyses to determine the relative statistical support for each model and identify the best model.
This function compares the cross-validation predictive accuracy of different models applied to the same data partitions (generated using makePartitions). Going in the order of model complexity specified by mod.order, starting from the least complex model, this function compares the performance of each model across all cross-validation data partitions against that of the next most complex model. The model that is determined to be "best" is the least complex model whose performance is statistically indistinguishable from that of the adjacent model (i.e., the model that is one rung higher in complexity, as specified by mod.order).
This function generates a figure comparing the cross-validation predictive accuracy of different models used to analyze the same data partitions. The "best" model among the set being compared is indicated with a golden arrow. The significance of the pairwise comparisons is indicated using letter groupings, as in a Tukey post-hoc test. The function also returns the name of the "best" model.
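A minimal usage sketch in R follows. The file names, model names, and colors below are hypothetical placeholders; substitute the output files from your own BEDASSLE cross-validation runs, ordered consistently with mod.order.

```r
library(BEDASSLE)

# Hypothetical cross-validation output files for three models of
# increasing complexity (file names are placeholders, not real outputs).
my.xval.files <- c("mod1_xval.txt", "mod2_xval.txt", "mod3_xval.txt")

# Compare the models; mod.order runs from least (1) to most (3) complex.
best.mod <- compare.model.xvals(
  xval.files = my.xval.files,
  mod.order  = c(1, 2, 3),
  mod.names  = c("model1", "model2", "model3"),  # hypothetical labels
  mod.cols   = c("gray", "blue", "red")
)

best.mod  # name of the "best" model, also highlighted in the figure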