benchmark {mlr}    R Documentation

Benchmark experiment for multiple learners and tasks
Description

Complete benchmark experiment to compare different learning algorithms across one or more tasks with respect to a given resampling strategy. Experiments are paired, meaning that the same training / test sets are used for all learners. Furthermore, you can pass "enhanced" learners via wrappers, e.g., a learner can be automatically tuned using makeTuneWrapper, as in the sketch below.
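For instance, the following minimal sketch wraps classif.rpart in makeTuneWrapper so that its cp parameter is tuned in an inner resampling loop during the benchmark; the parameter range and random-search budget are illustrative choices, not recommendations.

library(mlr)

# Tune rpart's complexity parameter inside the benchmark.
# Range and budget below are purely illustrative.
ps = makeParamSet(makeNumericParam("cp", lower = 0.001, upper = 0.1))
ctrl = makeTuneControlRandom(maxit = 5L)
inner = makeResampleDesc("Holdout")
tuned.rpart = makeTuneWrapper("classif.rpart", resampling = inner,
  par.set = ps, control = ctrl)

# The tuned learner is benchmarked like any plain learner; both learners
# see identical outer train/test splits because experiments are paired.
bmr = benchmark(list(makeLearner("classif.lda"), tuned.rpart),
  iris.task, makeResampleDesc("CV", iters = 2L))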
Usage

benchmark(
learners,
tasks,
resamplings,
measures,
keep.pred = TRUE,
keep.extract = FALSE,
models = FALSE,
show.info = getMlrOption("show.info")
)
Arguments

learners
    (list of Learner | character)
    Learning algorithms to compare. Instead of Learner objects you can also
    pass learner ids as strings.

tasks
    (list of Task)
    Tasks to run the learners on.

resamplings
    (list of ResampleDesc | ResampleInstance)
    Resampling strategy for each task. If only one is provided, it is
    replicated to match the number of tasks.

measures
    (list of Measure)
    Performance measures used to evaluate the learners on all tasks.

keep.pred
    (logical(1))
    Keep the prediction data in the pred slot of the result object. Set to
    FALSE to reduce the memory footprint. Default is TRUE.

keep.extract
    (logical(1))
    Keep the extract slot of the result object, e.g., tuning results of
    wrapped learners. Default is FALSE to reduce the memory footprint.

models
    (logical(1))
    Should all fitted models be stored in the result? Default is FALSE.

show.info
    (logical(1))
    Print verbose output on the console? Default is set via configureMlr.
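As a small sketch of how these arguments interact (the task and resampling choices here are illustrative): one resampling description can be supplied per task, and keep.pred = FALSE trims the result object when the stored predictions are not needed.

library(mlr)

# One resampling strategy per task; keep.pred = FALSE drops the stored
# predictions to save memory (aggregated performances are still computed).
rdescs = list(makeResampleDesc("CV", iters = 3L),
  makeResampleDesc("Subsample", iters = 3L))
bmr = benchmark(list(makeLearner("classif.rpart")),
  list(iris.task, sonar.task), rdescs, keep.pred = FALSE)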
Value

BenchmarkResult.
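A short sketch of querying the returned object with the accessor functions listed under See Also (bmr is the result built in the Examples section):

# Aggregated performance per task/learner combination as a data frame.
getBMRAggrPerformances(bmr, as.df = TRUE)

# Per-iteration performances and the raw predictions.
getBMRPerformances(bmr, as.df = TRUE)
getBMRPredictions(bmr, as.df = TRUE)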
See Also

Other benchmark:
BenchmarkResult,
batchmark(),
convertBMRToRankMatrix(),
friedmanPostHocTestBMR(),
friedmanTestBMR(),
generateCritDifferencesData(),
getBMRAggrPerformances(),
getBMRFeatSelResults(),
getBMRFilteredFeatures(),
getBMRLearnerIds(),
getBMRLearnerShortNames(),
getBMRLearners(),
getBMRMeasureIds(),
getBMRMeasures(),
getBMRModels(),
getBMRPerformances(),
getBMRPredictions(),
getBMRTaskDescs(),
getBMRTaskIds(),
getBMRTuneResults(),
plotBMRBoxplots(),
plotBMRRanksAsBarChart(),
plotBMRSummary(),
plotCritDifferences(),
reduceBatchmarkResults()
Examples

library(mlr)

# Compare LDA and a classification tree on two tasks,
# using 2-fold cross-validation and two performance measures.
lrns = list(makeLearner("classif.lda"), makeLearner("classif.rpart"))
tasks = list(iris.task, sonar.task)
rdesc = makeResampleDesc("CV", iters = 2L)
meas = list(acc, ber)
bmr = benchmark(lrns, tasks, rdesc, measures = meas)

# Rank the learners on each task and inspect the result.
rmat = convertBMRToRankMatrix(bmr)
print(rmat)

# Visualize the benchmark result.
plotBMRSummary(bmr)
plotBMRBoxplots(bmr, ber, style = "violin")
plotBMRRanksAsBarChart(bmr, pos = "stack")

# Statistical comparison of the learners across tasks.
friedmanTestBMR(bmr)
friedmanPostHocTestBMR(bmr, p.value = 0.05)