performanceEstimation: Estimate the predictive performance of modeling alternatives...


Description

This function can be used to estimate the predictive performance of alternative approaches to a set of predictive tasks, using different estimation methods. This is a generic function that should work with any modeling approach, provided a few assumptions are met. The function implements different estimation procedures, namely: cross validation, leave one out cross validation, hold-out, Monte Carlo simulations and bootstrap.

Usage

performanceEstimation(tasks, workflows, estTask, ...)

Arguments

tasks

This is a vector of objects of class PredTask, containing the predictive tasks that will be used in the estimation procedure.

workflows

This is a vector of objects of class Workflow, containing the workflows representing different approaches to the predictive tasks, and whose performance we want to estimate.

estTask

This is an object belonging to class EstimationTask. It is used to specify the metrics to be estimated and the method to use to obtain these estimates. See section Details for the possible values.

...

Any further parameters that are to be passed to the lower-level functions implementing each individual estimation methodology.

Details

The goal of this function is to allow estimating the performance of a set of alternative modelling approaches on a set of predictive tasks. The estimation can be carried out using different methodologies. All alternative approaches (which we will refer to as workflows) are applied using exactly the same data partitions for each task, thus ensuring that paired comparisons can be carried out with adequate statistical tests to check the significance of the observed differences in performance.

The first parameter of this function is a vector of PredTask objects that define the tasks to use in the estimation process.
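For illustration, a minimal sketch of this first argument could look as follows (the task names given as third argument are optional, purely illustrative labels):

library(performanceEstimation)
data(iris)
data(swiss)

## Two predictive tasks: a classification task on iris and a
## regression task on swiss
tasks <- c(PredTask(Species ~ ., iris, "irisSpecies"),
           PredTask(Infant.Mortality ~ ., swiss, "swissMort"))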

The second argument is a vector of Workflow objects. These can be created in two different ways: either directly, by calling the constructor of this class; or by using the workflowVariants function, which automatically generates different workflow objects as variants of some base workflow. Either way, there are two types of workflows: user-defined workflows and what we call "standard" workflows. The latter are workflows that people typically follow to solve predictive tasks and that are already implemented in this package to facilitate the task of the user. These standard workflows are implemented in the functions standardWF and timeseriesWF. When specifying the vector of workflows you use (either in the constructor or in the workflowVariants function) the parameter wf to indicate which workflow you wish to use. If you supply a name different from the two provided standard workflows, the function will assume that this is the name of a function you have created to implement your own workflow (see the Examples section for illustrations). If you omit the value of the wf parameter, the function assumes you want to use one of the standard workflows and will try to "guess" which one. Namely, if you provide some value for the parameter type (either "slide" or "grow"), it will assume that you are addressing a time series task and will thus set wf to timeseriesWF; in all other cases it will set it to standardWF. Summarizing, in terms of workflows you can use: i) your own user-defined workflows; ii) the standard workflow implemented by function standardWF; or iii) the standard time series workflow implemented by timeseriesWF.
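As a quick illustration of these alternatives, the following minimal sketch creates a single standard workflow, a set of variants of the standard workflow, and a workflow based on a user-defined function (the function name myWF and the parameter wL refer to the user-defined workflow shown in the Examples section below; the parameter values are illustrative only):

## A single "standard" workflow (wf omitted, so standardWF is assumed)
w1 <- Workflow(learner="svm")

## Several variants of the standard workflow (one per SVM cost value)
ws <- workflowVariants(learner="svm", learner.pars=list(cost=c(1,5,10)))

## A user-defined workflow: wf names your own workflow function
wu <- Workflow(wf="myWF", wL=0.7)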

Currently, the function allows 5 different types of estimation methods to be used, which are specified when you define the estimation task. These are different methods for providing reliable estimates of the true value of the selected evaluation metrics. Both the metrics and the estimation method are defined through the value provided in the argument estTask. The 5 estimation methodologies are the following (a short sketch of how to specify each of them is given after this list):

Cross validation: this type of estimate can be obtained by providing in the estTask argument an object of class EstimationTask with method set to an object of class CV (this is the default). More details on this method can be found in the help page of the class CV.

Leave one out cross validation: this type of estimate can be obtained by providing in the estTask argument an object of class EstimationTask with method set to an object of class LOOCV. More details on this method can be found in the help page of the class LOOCV.

Hold out: this type of estimate can be obtained by providing in the estTask argument an object of class EstimationTask with method set to an object of class Holdout. More details on this method can be found in the help page of the class Holdout.

Monte Carlo: this type of estimate can be obtained by providing in the estTask argument an object of class EstimationTask with method set to an object of class MonteCarlo. More details on this method can be found in the help page of the class MonteCarlo.

Bootstrap: this type of estimate can be obtained by providing in the estTask argument an object of class EstimationTask with method set to an object of class Bootstrap. More details on this method can be found in the help page of the class Bootstrap.
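The sketch below illustrates how estimation tasks using each of these methodologies could be specified; the metrics and parameter values shown are purely illustrative, and the help pages of the respective classes describe the full set of options:

## Cross validation: 1 repetition of 10-fold CV (the default method)
tcv  <- EstimationTask(metrics="mse", method=CV(nReps=1, nFolds=10))

## Leave one out cross validation
tloo <- EstimationTask(metrics="mse", method=LOOCV())

## Hold out: 10 repetitions of an 80%/20% train/test split
thld <- EstimationTask(metrics="acc", method=Holdout(nReps=10, hldSz=0.2))

## Monte Carlo: 10 random train/test splits of the given relative sizes
tmc  <- EstimationTask(metrics="mse",
                       method=MonteCarlo(nReps=10, szTrain=0.5, szTest=0.25))

## Bootstrap: 100 bootstrap resamples
tbot <- EstimationTask(metrics="mae", method=Bootstrap(nReps=100))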

Value

The result of the function is an object of class ComparisonResults.
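Objects of this class can then be explored with several utility functions of the package; a minimal, illustrative sketch, assuming res holds the result of a call such as the ones in the Examples section:

summary(res)        ## textual summary of the estimated scores
topPerformers(res)  ## best workflow per task and metric
rankWorkflows(res)  ## workflows ranked by their estimated scores

The statistical significance of the observed differences can then be checked with pairedComparisons (see the See Also section).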

Author(s)

Luis Torgo ltorgo@dcc.fc.up.pt

References

Torgo, L. (2014) An Infra-Structure for Performance Estimation and Experimental Comparison of Predictive Models in R. arXiv:1412.0436 [cs.MS] http://arxiv.org/abs/1412.0436

See Also

workflowVariants, topPerformers, rankWorkflows, pairedComparisons, CV, LOOCV, Holdout, MonteCarlo, Bootstrap

Examples

## Not run: 
## Estimating MSE on two data sets, using one repetition of 10-fold CV
library(e1071)
library(DMwR)
data(swiss)
data(mtcars)

## Estimating MSE using 10-fold CV for 4 variants of a standard workflow
## using an SVM as base learner and 3 variants of a regression tree. 
res <- performanceEstimation(
  c(PredTask(Infant.Mortality ~ .,swiss),PredTask(mpg ~ ., mtcars)),
  c(workflowVariants(learner="svm",
                     learner.pars=list(cost=c(1,10),gamma=c(0.01,0.5))),
    workflowVariants(learner="rpartXse",
                     learner.pars=list(se=c(0,0.5,1)))
  ),
  EstimationTask(metrics="mse")
  )

## Check a summary of the results
summary(res)

## best performers for each metric and task
topPerformers(res)


## Estimating the accuracy of a default SVM on the iris data using 10
## repetitions of an 80%-20% Holdout
data(iris)
res1 <- performanceEstimation(PredTask(Species  ~ .,iris),
             Workflow(learner="svm"),
             EstimationTask(metrics="acc",method=Holdout(nReps=10,hldSz=0.2)))
summary(res1)

## Now an example with a user-defined workflow
myWF <- function(form,train,test,wL=0.5,...) {
    require(rpart,quietly=TRUE)
    ## obtain the two base models and their predictions for the test set
    ml <- lm(form,train)
    mr <- rpart(form,train)
    pl <- predict(ml,test)
    pr <- predict(mr,test)
    ## weighted average of the linear model and regression tree predictions
    ps <- wL*pl+(1-wL)*pr
    ## return the true and predicted values of the target variable
    list(trues=responseValues(form,test),preds=ps)
}
resmywf <- performanceEstimation(
             PredTask(mpg ~ ., mtcars),
             workflowVariants(wf="myWF",wL=seq(0,1,by=0.1)),
             EstimationTask(metrics="mae",method=Bootstrap(nReps=50))
           )
summary(resmywf)


## End(Not run)
