ComparisonResults-class: Class "ComparisonResults"

Description

This is the main class that holds the results of performance estimation experiments in which several alternative workflows are applied and compared across several predictive tasks. For each workflow/task pair, a set of predictive performance metrics is estimated using some methodology, and the results of this process are stored in these objects.

Objects from the Class

Objects can be created by calls of the form ComparisonResults(...). These objects are essentially a list of lists of objects of class EstimationResults. The top level is a named list with as many components as there are tasks. For each task there is a named sub-list containing as many components as there are alternative workflows. Each of these components contains an object of class EstimationResults with the estimation results for that particular workflow/task combination.
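
Because of this list-like structure, a particular EstimationResults object can in principle be reached by indexing first by task name and then by workflow name. The following is only a sketch: the object res and the component names are hypothetical and depend on your own experiment.

## "res" is assumed to be a ComparisonResults object; the names below are illustrative
res[["swiss.Infant.Mortality"]]                ## named sub-list for one task
res[["swiss.Infant.Mortality"]][["svm.v1"]]    ## EstimationResults for one workflow/task pair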

Methods

plot

signature(x = "ComparisonResults", y = "missing"): plots the results of the experiments. The graph may become over-cluttered if too many workflows, tasks or evaluation metrics are involved; use the subset method (see below, and the sketch that follows) to overcome this.
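
For instance, a minimal sketch (assuming res is a ComparisonResults object obtained from a previous experiment) that restricts the plot to a single metric:

## plotting only one metric usually gives a much more readable graph
plot(subset(res, metrics="mse"))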

show

signature(object = "ComparisonResults"): shows the contents of an object in a human-readable way.

subset

signature(x = "ComparisonResults"): can be used to obtain a smaller ComparisonResults object containing only a subset of the information in the provided object. The method also accepts the arguments "tasks", "workflows" and "metrics", all of them vectors of numbers or names that can be used to subset the original object; they default to all values of each dimension. See "methods?subset" for further details. A sketch of a combined call is given below.
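
As a sketch, assuming res is a ComparisonResults object produced by a previous call to performanceEstimation and using purely illustrative workflow names, the three arguments can be combined in a single call:

## keep only the first task, two hypothetical workflow names and one metric
smallRes <- subset(res, tasks=1, workflows=c("svm.v1","svm.v2"), metrics="mse")
summary(smallRes)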

summary

signature(object = "ComparisonResults"): provides a summary of the performance estimation experiment.

Author(s)

Luis Torgo ltorgo@dcc.fc.up.pt

References

Torgo, L. (2014) An Infra-Structure for Performance Estimation and Experimental Comparison of Predictive Models in R. arXiv:1412.0436 [cs.MS] http://arxiv.org/abs/1412.0436

See Also

performanceEstimation, pairedComparisons, rankWorkflows, topPerformers, metricsSummary, mergeEstimationRes

Examples

showClass("ComparisonResults")
## Not run: 
## Estimating MAE, MSE, RMSE and MAPE for four variants of SVMs and
## three variants of regression trees, on two data sets, using one
## repetition of 10-fold CV
library(e1071)
library(DMwR)
data(swiss)
data(mtcars)

## running the estimation experiment
res <- performanceEstimation(
  c(PredTask(Infant.Mortality ~ .,swiss),PredTask(mpg ~ ., mtcars)),
  c(workflowVariants(learner="svm",
                     learner.pars=list(cost=c(1,10),gamma=c(0.01,0.5))),
    workflowVariants(learner="rpartXse",
                     learner.pars=list(se=c(0,0.5,1)))
  ),
  EstimationTask(metrics=c("mae","mse","rmse","mape"),method=CV())
  )

## Check a summary of the results
summary(res)

topPerformers(res)

summary(subset(res,metrics="mse"))
summary(subset(res,metrics="mse",partial=FALSE))
summary(subset(res,workflows="v1"))

## End(Not run)
