Description
This is the main class that holds the results of performance estimation experiments in which several alternative workflows are applied and compared on several predictive tasks. For each workflow/task pair, a set of predictive performance metrics is estimated using some methodology, and the results of this process are stored in these objects.
Objects from the Class

Objects can be created by calls of the form ComparisonResults(...). These objects are essentially a list of lists of objects of class EstimationResults. The top level is a named list with as many components as there are tasks. For each task there is a named sub-list containing as many components as there are alternative workflows. Each of these components contains an object of class EstimationResults with the estimation results for that particular workflow/task combination.
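For illustration only, the following (not run) sketch shows how such an object could be inspected, under the assumption that it can be indexed like a plain list of lists, as described above. The object res is the one created in the Examples section below; the task and workflow names used for indexing ("swiss.Infant.Mortality", "svm.v1") are illustrative guesses and may not match the actual component names.

## Not run:
## Hypothetical inspection of the nested structure of a ComparisonResults
## object `res` (see the Examples section); names are illustrative only.
names(res)                                    # names of the predictive tasks (top level)
names(res[["swiss.Infant.Mortality"]])        # workflow names for one task
er <- res[["swiss.Infant.Mortality"]][["svm.v1"]]
class(er)                                     # an "EstimationResults" object
## End(Not run)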
signature(x = "ComparisonResults", y = "missing")
: plots
the results of the experiments. It can result in an over-cluttered
graph if too many workflows/tasks/evaluation metrics - use the
subset method (see below) to overcome this.
signature(object = "ComparisonResults")
: shows the contents of an object in a proper way
signature(x = "ComparisonResults")
: can be used to obtain
a smaller ComparisonResults object containing only a subset of the information
of the provided object. This method also accepts the arguments "tasks",
"workflows" and "metrics". All are vectors of numbers or names
that can be used to subset the original object. They default to all values of each dimension. See "methods?subset" for further details.
signature(object = "ComparisonResults")
: provides a
summary of the performance estimation experiment.
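As a short, hedged illustration of combining these methods (using the res object created in the Examples section below; the workflow names passed to subset are illustrative and may not match the actual variant names):

## Not run:
## Restrict the results to a single metric before plotting, to avoid an
## over-cluttered graph, as recommended for the plot() method above.
plot(subset(res, metrics = "mse"))

## Keep only some workflows and then summarise them; "svm.v1" and "svm.v2"
## are illustrative names.
summary(subset(res, workflows = c("svm.v1", "svm.v2")))
## End(Not run)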
Author(s)

Luis Torgo ltorgo@dcc.fc.up.pt
References

Torgo, L. (2014) An Infra-Structure for Performance Estimation and Experimental Comparison of Predictive Models in R. arXiv:1412.0436 [cs.MS], http://arxiv.org/abs/1412.0436
See Also

performanceEstimation, pairedComparisons, rankWorkflows, topPerformers, metricsSummary, mergeEstimationRes
Examples

showClass("ComparisonResults")
## Not run:
## Estimating MAE, MSE, RMSE and MAPE for 3 variants of both
## regression trees and SVMs, on two data sets, using one repetition
## of 10-fold CV
library(e1071)
library(DMwR)
data(swiss)
data(mtcars)
## running the estimation experiment
res <- performanceEstimation(
c(PredTask(Infant.Mortality ~ .,swiss),PredTask(mpg ~ ., mtcars)),
c(workflowVariants(learner="svm",
learner.pars=list(cost=c(1,10),gamma=c(0.01,0.5))),
workflowVariants(learner="rpartXse",
learner.pars=list(se=c(0,0.5,1)))
),
EstimationTask(metrics=c("mae","mse","rmse","mape"),method=CV())
)
## Check a summary of the results
summary(res)
topPerformers(res)
summary(subset(res,metrics="mse"))
summary(subset(res,metrics="mse",partial=FALSE))
summary(subset(res,workflows="v1"))
## End(Not run)