Description

Compare models with k-fold cross-validation.
Usage

    cv_pred_error(..., k = 10, ntrials = 5, error_type)
Arguments

...          one or more models on which to perform the cross-validation

k            the k in k-fold: cross-validation will use (k-1)/k of the data
             for training

ntrials      how many random partitions to make; each partition will be one
             case in the output of the function

error_type   the kind of output to produce from each cross-validation; see
             Details
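The following sketch shows a typical call. It assumes the documented function is cv_pred_error from the statisticalModeling package and that ordinary lm models are accepted; both are assumptions, not confirmed by this page.

    # Sketch: function name and lm-compatibility are assumptions
    library(statisticalModeling)

    mod_small <- lm(mpg ~ hp, data = mtcars)        # a simpler candidate model
    mod_big   <- lm(mpg ~ hp + wt, data = mtcars)   # a richer candidate model

    # ntrials = 5 random partitions, each cross-validated with k = 10 folds;
    # each trial contributes one case to the output for each model
    cv_pred_error(mod_small, mod_big, k = 10, ntrials = 5)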
Details

The purpose of cross-validation is to provide "new" data on which to test a model's performance. In k-fold cross-validation, the data set used to train the model is broken into new training and testing data: most of the data is used for training while the remaining data is reserved for evaluating the model. Rather than training a single model, k models are trained, each with its own testing set, and the k testing sets are arranged to cover the whole of the data set.

On each of the k testing sets, a performance measure is calculated. Which measure is most appropriate depends on the kind of model: regression model or classifier. The most basic measure is the mean square error: the average squared difference between the actual response values in the testing data and the output of the model when presented with inputs from the testing data. This is appropriate for many regression models.
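To make the procedure concrete, here is a minimal hand-rolled sketch of one k-fold trial in base R, using mtcars and an lm model as stand-ins. It is illustrative only, not the package's implementation.

    # One random partition into k folds; each fold serves once as testing data
    set.seed(1)
    k <- 10
    fold <- sample(rep(1:k, length.out = nrow(mtcars)))

    squared_errors <- numeric(0)
    for (i in 1:k) {
      train <- mtcars[fold != i, ]   # (k-1)/k of the data for training
      test  <- mtcars[fold == i, ]   # the remaining fold for testing
      fit   <- lm(mpg ~ hp + wt, data = train)
      pred  <- predict(fit, newdata = test)
      squared_errors <- c(squared_errors, (test$mpg - pred)^2)
    }

    # Mean square prediction error across all k testing sets
    mean(squared_errors)

Repeating this with ntrials different random partitions yields one such error value per trial, which is what lets the output be compared across models.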