Description

Tests a model generated with random forest using k-fold cross-validation: the data are divided into k parts, the model is trained on k-1 parts and tested on the remaining part, and this is repeated for each part. The goodness-of-fit value is calculated from all k tests.
Usage

k_cross_validation(k, data, target, predictor_names, percent_predictors,
  num_trees, complex_param, min_split, min_bucket, max_depth,
  isClasification, percent_obs)
Arguments

k: number of folds; how many times the test should be run

data: data used to build the trees

target: attribute to predict

predictor_names: names of the attributes used to predict the target

percent_predictors: percentage of predictors used per tree

num_trees: number of trees in the forest

complex_param: cp parameter in rpart (complexity parameter: any split that does not decrease the overall lack of fit by a factor of cp is not attempted. For instance, with anova splitting, this means that the overall R-squared must increase by cp at each step. The main role of this parameter is to save computing time by pruning off splits that are obviously not worthwhile. Essentially, the user informs the program that any split which does not improve the fit by cp will likely be pruned off by cross-validation, and hence the program need not pursue it)

min_split: minsplit parameter in rpart (the minimum number of observations that must exist in a node in order for a split to be attempted)

min_bucket: minbucket parameter in rpart (the minimum number of observations in any terminal <leaf> node. If only one of minbucket or minsplit is specified, the code either sets minsplit to minbucket*3 or minbucket to minsplit/3, as appropriate)

max_depth: maxdepth parameter in rpart (the maximum depth of any node of the final tree, with the root node counted as depth 0. Values greater than 30 will give nonsense results on 32-bit machines)

isClasification: whether the target is categorical (classification) or numeric (regression)

percent_obs: percentage of observations sampled for each tree
Value

Percentage of correctly predicted values.
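As an illustration, a call might look like the sketch below, using R's built-in iris data to cross-validate a classification forest. Argument conventions not stated on this page (e.g. whether target and predictor_names are passed as strings, or whether the percentage arguments are fractions in [0, 1]) are assumptions here, not confirmed behavior.

```r
# Hypothetical usage sketch: 5-fold cross-validation of a random forest
# predicting iris Species. The exact argument formats are assumptions.
result <- k_cross_validation(
  k = 5,                      # 5 folds: train on 4/5, test on 1/5
  data = iris,
  target = "Species",
  predictor_names = c("Sepal.Length", "Sepal.Width",
                      "Petal.Length", "Petal.Width"),
  percent_predictors = 0.5,   # half of the predictors per tree
  num_trees = 50,
  complex_param = 0.01,       # rpart cp
  min_split = 20,             # rpart minsplit
  min_bucket = 7,             # rpart minbucket
  max_depth = 10,             # rpart maxdepth
  isClasification = TRUE,     # Species is categorical
  percent_obs = 0.7           # 70% of observations per tree
)
result  # percentage of correctly predicted values across all k tests
```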