Description

Given a ComparisonResults object, this function provides a summary statistic (defaulting to the mean) of the individual scores obtained on each evaluation metric over all repetitions carried out in the estimation process. This is done for all workflows and tasks of the performance estimation experiment. The function is handy for obtaining, for instance, the maximum score achieved by each workflow on a particular metric over all repetitions of the experimental process. With its defaults it is also a quick way of getting the estimated value of each metric for each alternative workflow and task (see the Examples section).
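For instance, the "maximum score" use case mentioned above corresponds to a call like the sketch below (results is assumed to be a ComparisonResults object such as the one built in the Examples section):

## Maximum score of each workflow on each metric, over all repetitions
metricsSummary(results, summary = "max")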
Usage

metricsSummary(compRes, summary = "mean", ...)
Arguments

compRes: An object of class ComparisonResults with the results of a performance estimation experiment (the output of a call to performanceEstimation()).

summary: A string with the name of the function to be used to obtain the summary (defaults to "mean"). This function is applied to the set of individual scores of each workflow on each task, for all metrics.

...: Further arguments passed to the selected summary function.
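Arguments supplied through the dots are forwarded to the chosen summary function. As a small sketch (assuming the results object created in the Examples section below), the following passes the trim argument on to mean() so a trimmed mean is used as the summary:

## Trimmed mean of the individual scores (trim is passed on to mean())
metricsSummary(results, summary = "mean", trim = 0.1)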
Value

The result of this function is a named list with as many components as there are predictive tasks. For each task (component) we get a matrix with as many columns as there are workflows and as many rows as there are evaluation metrics. The values in this matrix are the result of applying the selected summary function to the metric scores obtained on each iteration of the estimation process.
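As an illustration of how this structure can be inspected (a sketch only; the results object and the "mse" metric come from the Examples section below):

summ <- metricsSummary(results)   ## named list, one component per task
names(summ)                       ## names of the predictive tasks
summ[[1]]                         ## metrics x workflows matrix for the first task
summ[[1]]["mse", ]                ## summary of "mse" for each workflow on that task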
Author(s)

Luis Torgo ltorgo@dcc.fc.up.pt
References

Torgo, L. (2014) An Infra-Structure for Performance Estimation and Experimental Comparison of Predictive Models in R. arXiv:1412.0436 [cs.MS] http://arxiv.org/abs/1412.0436
See Also

performanceEstimation, topPerformers, topPerformer, rankWorkflows
Examples

## Not run:
## Estimating several evaluation metrics on different variants of an
## SVM, on two data sets, using 2 repetitions of 5-fold CV
data(swiss)
data(mtcars)
library(performanceEstimation)
library(e1071)
## run the experimental comparison
results <- performanceEstimation(
c(PredTask(Infant.Mortality ~ ., swiss),
PredTask(mpg ~ ., mtcars)),
c(workflowVariants(learner='svm',
learner.pars=list(cost=c(1,5),gamma=c(0.1,0.01))
)
),
EstimationTask(metrics=c("mse","mae"),method=CV(nReps=2,nFolds=5))
)
## Get the minimum value of each metric over all iterations of the CV
## process.
metricsSummary(results,summary="min")
## Get a summary table for each task with the estimated scores for each
## metric by each workflow
metricsSummary(results)
## End(Not run)