compare.model.xvals: Compare the cross-validation output of different models

View source: R/model.comparison.R

compare.model.xvals    R Documentation

Compare the cross-validation output of different models

Description

compare.model.xvals compares the outputs of different BEDASSLE cross-validation analyses.

Usage

compare.model.xvals(xval.files, mod.order, mod.names, mod.cols = NULL)

Arguments

xval.files

A character vector giving the filenames (in quotes, with the full file path) of the output files of an xValidation analysis.

mod.order

An integer vector giving the ordering of the model hypotheses to be compared. Lower numbers indicate less complex or more parsimonious models. Ties are not allowed.

mod.names

A character vector giving the names associated with the models run in the different cross-validation analyses.

mod.cols

A vector containing the colors to be used in plotting the output of the different cross-validation analyses. Default is black.

Details

This function compares outputs of different BEDASSLE cross-validation analyses to determine the relative statistical support for each model and identify the best model.

This function compares the cross-validation predictive accuracy of different models applied to the same data partitions (generated using makePartitions). Proceeding in the order of model complexity specified by mod.order, starting from the least complex model, it compares the performance of each model across all cross-validation data partitions against that of the next most complex model. The model determined to be "best" is the least complex model whose performance is statistically indistinguishable from that of the adjacent model (i.e., the model one rung higher in complexity, as specified by mod.order).
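The sequential comparison described above can be sketched in base R. This is an illustrative sketch only: the simulated accuracy matrix, the paired t-test criterion, and the 0.05 threshold are assumptions for demonstration, not BEDASSLE's actual implementation.

```r
# Sketch of the ladder-of-complexity comparison (all data simulated;
# the paired t-test and alpha = 0.05 are illustrative assumptions).
# Rows = cross-validation partitions, columns = models,
# entries = predictive accuracy (e.g., log-likelihood of held-out data).
set.seed(1)
n.partitions <- 10
acc <- cbind(
  null = rnorm(n.partitions, mean = -105, sd = 2),
  dist = rnorm(n.partitions, mean = -100, sd = 2),
  full = rnorm(n.partitions, mean = -100, sd = 2)
)
mod.order <- c(1, 2, 3)            # least to most complex
mod.names <- colnames(acc)

# Walk up the complexity ladder: the "best" model is the least complex
# one whose accuracy is indistinguishable from the next model up.
best <- mod.names[mod.order[length(mod.order)]]
for (i in seq_len(length(mod.order) - 1)) {
  simpler <- mod.order[i]
  complex <- mod.order[i + 1]
  p <- t.test(acc[, simpler], acc[, complex], paired = TRUE)$p.value
  if (p > 0.05) {                  # indistinguishable: keep the simpler model
    best <- mod.names[simpler]
    break
  }
}
best
```

In this sketch, if every pairwise comparison is significant, the most complex model is retained by default.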

Value

This function generates a figure comparing the cross-validation predictive accuracy of different models used to analyze the same data partitions. The "best" model among the set being compared is indicated with a golden arrow. The significance of the pairwise comparisons is indicated using letter groupings, as in a Tukey post-hoc test. The function also returns the name of the "best" model.
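A hypothetical call might look like the following; the file paths, file extensions, and model names are placeholders, not outputs of a real analysis.

```r
# Hypothetical example: compare three cross-validation analyses run on
# the same data partitions (all paths and names below are placeholders).
xval.files <- c("~/bedassle/null_xval_results.Robj",
                "~/bedassle/dist_xval_results.Robj",
                "~/bedassle/full_xval_results.Robj")

best.mod <- compare.model.xvals(
  xval.files = xval.files,
  mod.order  = c(1, 2, 3),                 # null < dist < full in complexity
  mod.names  = c("null", "dist", "full"),
  mod.cols   = c("black", "blue", "red")   # one plotting color per model
)
```

The returned value (here best.mod) is the name of the "best" model, alongside the comparison figure the function draws.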


gbradburd/bedassle documentation built on May 20, 2022, 1 p.m.