Description

Runs a resampling (possibly in parallel).

Usage

resample(task, learner, resampling, store_models = FALSE)

Arguments

task
:: Task.

learner
:: Learner.

resampling
:: Resampling.

store_models
:: logical(1). Keep the fitted models after the test sets have been predicted? See the Note section.

Value

ResampleResult.
Parallelization

This function can be parallelized with the future package.
One job is one resampling iteration, and all jobs are sent to an apply function
from future.apply in a single batch.
To select a parallel backend, use future::plan().
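As a minimal sketch (assuming the future and future.apply packages are installed alongside mlr3), enabling parallelization only requires selecting a backend before calling resample():

```r
library(mlr3)
library(future)

# Select a parallel backend; "multisession" runs jobs in background R sessions
plan(multisession)

task = tsk("iris")
learner = lrn("classif.rpart")
resampling = rsmp("cv", folds = 3)

# Each of the 3 resampling iterations is dispatched as one job
rr = resample(task, learner, resampling)

# Revert to sequential execution afterwards
plan(sequential)
```

Any backend registered via future::plan() works here; resample() itself needs no changes.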
Logging

mlr3 uses the lgr package for logging.
lgr supports multiple log levels, which can be queried with
getOption("lgr.log_levels").

To suppress output and reduce verbosity, you can lower the log level from the
default "info" to "warn":

lgr::get_logger("mlr3")$set_threshold("warn")

To get additional log output for debugging, increase the log level to "debug"
or "trace":

lgr::get_logger("mlr3")$set_threshold("debug")

To log to a file or a database, see the documentation of lgr::lgr-package.
Note

The fitted models are discarded after the predictions have been scored in order to reduce memory consumption.
If you need access to the models for later analysis, set store_models to TRUE.
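A short sketch of retaining the models (assuming mlr3 is loaded):

```r
library(mlr3)

task = tsk("iris")
learner = lrn("classif.rpart")
resampling = rsmp("holdout")

# store_models = TRUE keeps each iteration's fitted model in the result
rr = resample(task, learner, resampling, store_models = TRUE)

# Access the model fitted in the first resampling iteration
rr$learners[[1]]$model
```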
Examples

task = tsk("iris")
learner = lrn("classif.rpart")
resampling = rsmp("cv")
# explicitly instantiate the resampling for this task for reproducibility
set.seed(123)
resampling$instantiate(task)
rr = resample(task, learner, resampling)
print(rr)
# retrieve performance
rr$score(msr("classif.ce"))
rr$aggregate(msr("classif.ce"))
# merged prediction objects of all resampling iterations
pred = rr$prediction()
pred$confusion
# Repeat resampling with featureless learner
rr_featureless = resample(task, lrn("classif.featureless"), resampling)
# Convert results to BenchmarkResult, then combine them
bmr1 = as_benchmark_result(rr)
bmr2 = as_benchmark_result(rr_featureless)
print(bmr1$combine(bmr2))