ResampleResult (R Documentation)
This is the result container object returned by resample().
Note that all stored objects are accessed by reference. Do not modify any object without cloning it first.
ResampleResults can be visualized via mlr3viz's autoplot() function.
as.data.table(rr, reassemble_learners = TRUE, convert_predictions = TRUE, predict_sets = "test")
ResampleResult -> data.table::data.table()
Returns a tabular view of the internal data.
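As a minimal sketch (assuming the mlr3 and rpart packages are installed), the conversion flattens the result into one row per resampling iteration:

```r
library(mlr3)

# fit a small resample experiment: 3-fold CV with a decision tree
rr = resample(tsk("penguins"), lrn("classif.rpart"), rsmp("cv", folds = 3))

# tabular view of the internal data: one row per resampling iteration
tab = as.data.table(rr)
nrow(tab)  # 3, one row per fold
```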
c(...)
(ResampleResult, ...) -> BenchmarkResult
Combines multiple objects convertible to BenchmarkResult into a new BenchmarkResult.
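For example (a sketch assuming mlr3 and rpart are installed), two ResampleResults on the same task can be combined for comparison:

```r
library(mlr3)

task = tsk("penguins")
resampling = rsmp("cv", folds = 3)

rr1 = resample(task, lrn("classif.rpart"), resampling)
rr2 = resample(task, lrn("classif.featureless"), resampling)

# combine both resample results into a single BenchmarkResult
bmr = c(rr1, rr2)
```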
task_type (character(1))
Task type of objects in the ResampleResult, e.g. "classif" or "regr". This is NA for empty ResampleResults.
uhash (character(1))
Unique hash for this object.
iters (integer(1))
Number of resampling iterations stored in the ResampleResult.
task (Task)
The task resample() operated on.
learner (Learner)
Learner prototype resample() operated on. For a list of trained learners, see field $learners.
resampling (Resampling)
Instantiated Resampling object which stores the training and test splits.
learners (list of Learner)
List of trained learners, sorted by resampling iteration.
warnings (data.table::data.table())
A table with all warning messages. Column names are "iteration" and "msg". Note that there can be multiple rows per resampling iteration if multiple warnings have been recorded.
errors (data.table::data.table())
A table with all error messages. Column names are "iteration" and "msg". Note that there can be multiple rows per resampling iteration if multiple errors have been recorded.
new()
Creates a new instance of this R6 class. An alternative construction method is provided by as_resample_result().
ResampleResult$new(data = ResultData$new(), view = NULL)
data (ResultData | data.table())
An object of type ResultData, either extracted from another ResampleResult, another BenchmarkResult, or manually constructed with as_result_data().
view (character())
Single uhash of the ResultData to operate on. Used internally for optimizations.
format()
Helper for print outputs.
ResampleResult$format(...)
... (ignored).
print()
Printer.
ResampleResult$print(...)
... (ignored).
help()
Opens the corresponding help page referenced by field $man.
ResampleResult$help()
prediction()
Combined Prediction of all individual resampling iterations, and all provided predict sets. Note that, by default, most performance measures do not operate on this object directly, but instead on the prediction objects from the resampling iterations separately, and then combine the performance scores with the aggregate function of the respective Measure (macro averaging). If you calculate the performance on this prediction object directly, this is called micro averaging.
ResampleResult$prediction(predict_sets = "test")
predict_sets (character())
Subset of {"train", "test"}.
Returns a Prediction, or an empty list() if no predictions are available.
predictions()
List of prediction objects, sorted by resampling iteration. If multiple predict sets are given, they are combined into a single prediction object per iteration. If you evaluate the performance on all of the returned prediction objects and then average them, this is called macro averaging. For micro averaging, operate on the combined prediction object as returned by $prediction().
ResampleResult$predictions(predict_sets = "test")
predict_sets (character())
Subset of {"train", "test", "internal_valid"}.
Returns a list of Prediction objects, one per element in predict_sets, or a list of empty list()s if no predictions are available.
score()
Returns a table with one row for each resampling iteration, including all involved objects: Task, Learner, Resampling, iteration number (integer(1)), and (if enabled) one Prediction for each predict set of the Learner. Additionally, a column with the individual (per resampling iteration) performance is added for each Measure in measures, named with the id of the respective measure. If measures is NULL, it defaults to the return value of default_measures().
ResampleResult$score(measures = NULL, ids = TRUE, conditions = FALSE, predictions = TRUE)
measures (Measure | list of Measure)
Measure(s) to calculate.
ids (logical(1))
If ids is TRUE, extra columns with the ids of the objects ("task_id", "learner_id", "resampling_id") are added to the returned table, allowing for more convenient subsetting.
conditions (logical(1))
Adds condition messages ("warnings", "errors") as extra list columns of character vectors to the returned table.
predictions (logical(1))
Additionally return prediction objects, one column for each predict set of the learner. Columns are named "prediction_train", "prediction_test" and "prediction_internal_valid", if present.
Returns a data.table::data.table().
obs_loss()
Calculates the observation-wise loss via the loss function set in the Measure's field obs_loss. Returns a data.table() with the columns of the matching Prediction object plus one additional numeric column for each measure, named with the respective measure id. If there is no observation-wise loss function for the measure, the column is filled with NA values. Note that some measures, such as RMSE, do have an $obs_loss, but require an additional transformation after aggregation, in this example taking the square root.
ResampleResult$obs_loss(measures = NULL, predict_sets = "test")
measures (Measure | list of Measure)
Measure(s) to calculate.
predict_sets (character())
The predict sets.
aggregate()
Calculates and aggregates performance values for all provided measures, according to the respective aggregation function in Measure. If measures is NULL, it defaults to the return value of default_measures().
ResampleResult$aggregate(measures = NULL)
measures (Measure | list of Measure)
Measure(s) to calculate.
Returns a named numeric().
filter()
Subsets the ResampleResult, reducing it to only keep the iterations specified in iters.
ResampleResult$filter(iters)
iters (integer())
Resampling iterations to keep.
Returns the object itself, but modified by reference. You need to explicitly $clone() the object beforehand if you want to keep the object in its previous state.
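A sketch of the clone-before-filter pattern (assuming mlr3 and rpart are installed):

```r
library(mlr3)

rr = resample(tsk("penguins"), lrn("classif.rpart"), rsmp("cv", folds = 3))

# keep a copy of the full result, since $filter() modifies by reference
full = rr$clone()
rr$filter(c(1, 3))

rr$iters    # 2 iterations remain
full$iters  # the clone still holds all 3
```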
discard()
Shrinks the ResampleResult by discarding parts of the internally stored data. Note that certain operations might stop working afterwards, e.g. extracting importance values from learners or calculating measures that require the task's data.
ResampleResult$discard(backends = FALSE, models = FALSE)
backends (logical(1))
If TRUE, the DataBackend is removed from all stored Tasks.
models (logical(1))
If TRUE, the stored model is removed from all Learners.
Returns the object itself, but modified by reference. You need to explicitly $clone() the object beforehand if you want to keep the object in its previous state.
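For instance (a sketch assuming mlr3 and rpart are installed), discarding models shrinks a result whose models were stored during resampling:

```r
library(mlr3)

rr = resample(tsk("penguins"), lrn("classif.rpart"), rsmp("cv", folds = 3),
  store_models = TRUE)

# drop the fitted models to save memory (modifies rr in place)
rr$discard(models = TRUE)

# the stored model is gone from the trained learners
is.null(rr$learners[[1]]$model)
```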
marshal()
Marshals all stored models.
ResampleResult$marshal(...)
... (any)
Additional arguments passed to marshal_model().

unmarshal()
Unmarshals all stored models.
ResampleResult$unmarshal(...)
... (any)
Additional arguments passed to unmarshal_model().
clone()
The objects of this class are cloneable with this method.
ResampleResult$clone(deep = FALSE)
deep
Whether to make a deep clone.
as_benchmark_result() to convert to a BenchmarkResult.
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter3/evaluation_and_benchmarking.html#sec-resampling
Package mlr3viz for some generic visualizations.
Other resample:
resample()
task = tsk("penguins")
learner = lrn("classif.rpart")
resampling = rsmp("cv", folds = 3)
rr = resample(task, learner, resampling)
print(rr)
# combined predictions and predictions for each fold separately
rr$prediction()
rr$predictions()
# folds scored separately, then aggregated (macro)
rr$aggregate(msr("classif.acc"))
# predictions first combined, then scored (micro)
rr$prediction()$score(msr("classif.acc"))
# check for warnings and errors
rr$warnings
rr$errors