benchmark: Benchmark experiment for multiple learners and tasks.


Description

Complete benchmark experiment to compare different learning algorithms across one or more tasks with respect to a given resampling strategy. Experiments are paired: the same training / test splits are used for all learners. You can also pass "enhanced" learners via wrappers, e.g., a learner that is automatically tuned using makeTuneWrapper.
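
A minimal sketch of benchmarking a tuning-wrapped learner (the parameter set and grid values are illustrative only; cv3 is mlr's predefined 3-fold cross-validation description):

library(mlr)
# Wrap rpart so that cp is tuned inside each outer resampling iteration
ps = makeParamSet(makeDiscreteParam("cp", values = c(0.01, 0.05, 0.1)))
tuned.rpart = makeTuneWrapper("classif.rpart", resampling = cv3,
  par.set = ps, control = makeTuneControlGrid())
# The wrapped learner is benchmarked like any plain learner
bmr = benchmark(list(tuned.rpart, makeLearner("classif.lda")), iris.task,
  makeResampleDesc("CV", iters = 3L))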

Usage

benchmark(learners, tasks, resamplings, measures, keep.pred = TRUE,
  models = TRUE, show.info = getMlrOption("show.info"))

Arguments

learners

[(list of) Learner | character]
Learning algorithms to be compared; can also be a single learner. If you pass strings, the learners are created via makeLearner (see the sketch after this argument list).

tasks

[(list of) Task]
Tasks that learners should be run on.

resamplings

[(list of) ResampleDesc | ResampleInstance]
Resampling strategy for each task. If only one is provided, it is replicated to match the number of tasks. If missing, 10-fold cross-validation is used.

measures

[(list of) Measure]
Performance measures for all tasks. If missing, the default measure of the first task is used.

keep.pred

[logical(1)]
Keep the prediction data in the pred slot of the result object. When running many experiments on larger data sets, these objects can unnecessarily increase object size / memory usage if you do not actually need them; in that case, set this argument to FALSE (see the sketch after this argument list). Default is TRUE.

models

[logical(1)]
Should all fitted models be stored in the ResampleResult? Default is TRUE.

show.info

[logical(1)]
Print verbose output on console? Default is set via configureMlr.
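
A minimal sketch of the argument variants described above (illustrative, not exhaustive): learners given as strings, a single resampling description reused for several tasks, and prediction / model storage switched off to save memory:

# Strings are turned into learners internally via makeLearner
lrns = c("classif.lda", "classif.rpart")
tasks = list(iris.task, sonar.task)
# One resampling description is replicated across both tasks
rdesc = makeResampleDesc("CV", iters = 3L)
# Drop stored predictions and fitted models to keep the result object small;
# note that getters such as getBMRPredictions then have nothing to return
bmr = benchmark(lrns, tasks, rdesc, keep.pred = FALSE, models = FALSE)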

Value

[BenchmarkResult].
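
A short sketch of how the returned object is typically inspected, continuing the bmr object from the sketch above (accessor names are taken from the See Also list below):

# Aggregated performances as a data.frame, one row per task / learner pair
perf = getBMRAggrPerformances(bmr, as.df = TRUE)
head(perf)
# Ids of the benchmarked learners and tasks
getBMRLearnerIds(bmr)
getBMRTaskIds(bmr)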

See Also

Other benchmark: BenchmarkResult, batchmark, convertBMRToRankMatrix, friedmanPostHocTestBMR, friedmanTestBMR, generateCritDifferencesData, getBMRAggrPerformances, getBMRFeatSelResults, getBMRFilteredFeatures, getBMRLearnerIds, getBMRLearnerShortNames, getBMRLearners, getBMRMeasureIds, getBMRMeasures, getBMRModels, getBMRPerformances, getBMRPredictions, getBMRTaskDescs, getBMRTaskIds, getBMRTuneResults, plotBMRBoxplots, plotBMRRanksAsBarChart, plotBMRSummary, plotCritDifferences, reduceBatchmarkResults

Examples

# Compare two learners on two tasks with 2-fold cross-validation
lrns = list(makeLearner("classif.lda"), makeLearner("classif.rpart"))
tasks = list(iris.task, sonar.task)
rdesc = makeResampleDesc("CV", iters = 2L)
meas = list(acc, ber)
bmr = benchmark(lrns, tasks, rdesc, measures = meas)
# Rank the learners per task and print the rank matrix
rmat = convertBMRToRankMatrix(bmr)
print(rmat)
# Visual summaries of the benchmark result
plotBMRSummary(bmr)
plotBMRBoxplots(bmr, ber, style = "violin")
plotBMRRanksAsBarChart(bmr, pos = "stack")
# Statistical comparison of the learners across tasks
friedmanTestBMR(bmr)
friedmanPostHocTestBMR(bmr, p.value = 0.05)
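
The result can be drilled into further; a brief sketch using accessors from the See Also list (this requires keep.pred = TRUE, the default):

# Per-iteration performances and predictions as data.frames
perfs = getBMRPerformances(bmr, as.df = TRUE)
preds = getBMRPredictions(bmr, as.df = TRUE)
head(perfs)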
