Description
This function obtains cross validation estimates of performance metrics for a given predictive task and method to solve it (i.e. a workflow). The function is general in the sense that the workflow function that the user provides as the solution to the task can implement or call whatever modeling technique the user wants.
The function implements N x k-fold cross validation (CV) estimation. Different settings of this methodology are available through the argument estTask (check the help pages of EstimationTask and CV).
Please note that most of the time you will not call this function directly (though there is nothing wrong in doing so). Instead, you will typically use the function performanceEstimation, which allows you to carry out performance estimation for multiple workflows on multiple tasks, using whatever estimation method you want (e.g. cross validation). Still, when you simply want the CV estimates of one workflow on one task, you may prefer to use this function directly.
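For comparison, here is a minimal sketch of the same kind of estimation carried out through the more general performanceEstimation function (it reuses the svm workflow, the swiss data and the estimation task that appear in the Examples section below):

library(performanceEstimation)
library(e1071)
data(swiss)
## Same single-task / single-workflow estimation, but going through the
## general performanceEstimation() interface instead of cvEstimates()
res <- performanceEstimation(
    PredTask(Infant.Mortality ~ ., swiss),
    Workflow(learner="svm"),
    EstimationTask(metrics="mse", method=CV(nReps=2, nFolds=10)))
summary(res)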
Usage

cvEstimates(wf, task, estTask, cluster)
Arguments

wf
an object of the class Workflow representing the modelling approach to be evaluated on the given predictive task.

task
an object of the class PredTask defining the predictive task for which we want the estimates.

estTask
an object of the class EstimationTask indicating the metrics to be estimated and the cross validation settings to use.

cluster
an optional parameter controlling the parallel execution of the experiment. It can either be TRUE/FALSE (the former runs the estimation in parallel on the local machine, as in the Examples below) or an already created cluster object to be used for the parallel computation.
Details

The idea of this function is to carry out a cross validation experiment with the goal of obtaining reliable estimates of the predictive performance of a certain approach to a predictive task. This approach (denoted here as a workflow) will be evaluated on the given predictive task using some user-selected metrics, and this function will provide k-fold cross validation estimates of the true values of these evaluation metrics. k-fold cross validation estimates are obtained by randomly partitioning the given data set into k equal-sized sub-sets. Then a learn+test process is repeated k times. At each iteration one of the k partitions is left aside as test set and the model is obtained with a training set formed by the remaining k-1 partitions. The process is repeated so that each of the k partitions is left aside as test set exactly once. In the end, the average of the k scores obtained in the iterations is the cross validation estimate.
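To make the procedure concrete, the following is a small base-R illustration of the partitioning scheme just described. It is only an illustration of the idea, not the internal implementation of cvEstimates, and it uses the swiss data with a plain lm model as stand-ins:

data(swiss)
set.seed(1234)
k <- 10
## randomly assign each row to one of the k partitions
folds <- sample(rep(1:k, length.out=nrow(swiss)))
scores <- sapply(1:k, function(i) {
    train <- swiss[folds != i, ]   # the k-1 partitions used for training
    test  <- swiss[folds == i, ]   # the partition left aside for testing
    m <- lm(Infant.Mortality ~ ., train)
    mean((test$Infant.Mortality - predict(m, test))^2)  # MSE on the left-out partition
})
mean(scores)  # the CV estimate is the average of the k scores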
Parallel execution of the estimation experiment is only recommended for reasonably large data sets; otherwise you may actually increase the computation time due to the communication costs between the processes.
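If you need more control over the parallel setup than the cluster=TRUE form used in the Examples below, the following sketch assumes that the cluster argument also accepts a cluster object created with parallel::makeCluster (the tiny swiss data set is used only to keep the snippet self-contained; as noted above, parallelism only pays off on larger data sets):

library(performanceEstimation)
library(parallel)
data(swiss)
cl <- makeCluster(2)   # two local worker processes (assumed to be accepted by 'cluster')
res <- cvEstimates(Workflow(learner="lm"),
                   PredTask(Infant.Mortality ~ ., swiss),
                   EstimationTask(metrics="mse", method=CV()),
                   cluster=cl)
stopCluster(cl)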
Value

The result of the function is an object of class EstimationResults.
Author(s)

Luis Torgo ltorgo@dcc.fc.up.pt
References

Torgo, L. (2014) An Infra-Structure for Performance Estimation and Experimental Comparison of Predictive Models in R. arXiv:1412.0436 [cs.MS], http://arxiv.org/abs/1412.0436
See Also

CV, Workflow, standardWF, PredTask, EstimationTask, performanceEstimation, hldEstimates, bootEstimates, loocvEstimates, mcEstimates, EstimationResults
Examples

## Not run:
## Estimating the mean squared error of svm on the swiss data,
## using two repetitions of 10-fold CV
library(e1071)
data(swiss)
## Now the evaluation
eval.res <- cvEstimates(
Workflow(wf="standardWF", wfID="mySVMtrial",
learner="svm", learner.pars=list(cost=10,gamma=0.1)
),
PredTask(Infant.Mortality ~ ., swiss),
EstimationTask(metrics="mse",method=CV(nReps=2,nFolds=10))
)
## Check a summary of the results
summary(eval.res)
## An example with a user-defined workflow function implementing a
## simple approach using linear regression models but also containing
## some data pre-processing as well as some results post-processing.
myLM <- function(form,train,test,k=10,.outModel=FALSE) {
require(DMwR)
## fill-in NAs on both the train and test sets
ntr <- knnImputation(train,k)
nts <- knnImputation(test,k,distData=train)
## obtain a linear regression model and simplify it
md <- lm(form,ntr)
md <- step(md)
## get the model predictions
p <- predict(md,nts)
## post-process the predictions (this is an example assuming the target
## variable is always positive so we change negative predictions into 0)
p <- ifelse(p < 0,0,p)
## now get the final return object
res <- list(trues=responseValues(form,nts), preds=p)
if (.outModel) res <- c(res,list(model=md))
res
}
## Now for the CV estimation
data(algae,package="DMwR")
eval.res2 <- cvEstimates(
Workflow(wf="myLM",k=5),
PredTask(a1 ~ ., algae[,1:12],"alga1"),
EstimationTask("mse",method=CV()))
## Check a summary of the results
summary(eval.res2)
##
## Parallel execution example
##
## Comparing the time of sequential and parallel execution
## using half of the cores of the local machine
##
data(Satellite,package="mlbench")
library(e1071)
system.time({p <- cvEstimates(Workflow(learner="svm"),
PredTask(classes ~ .,Satellite),
EstimationTask("err",Boot(nReps=10)),
cluster=TRUE)})[3]
system.time({np <- cvEstimates(Workflow(learner="svm"),
PredTask(classes ~ .,Satellite),
EstimationTask("err",Boot(nReps=10)))})[3]
## End(Not run)