ResampleResult: Container for Results of 'resample()'


Description

This is the result container object returned by resample().

Note that all stored objects are accessed by reference. Do not modify any object without cloning it first.

ResampleResults can be visualized via mlr3viz's autoplot() function.
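
For instance, per-iteration performance can be visualized directly (a minimal sketch, assuming the mlr3viz package is installed and rr is a ResampleResult such as the one created in the Examples below):

library(mlr3viz)

# default plot: per-iteration performance of the resampling
autoplot(rr)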

S3 Methods

  • as.data.table(rr, reassemble_learners = TRUE, convert_predictions = TRUE, predict_sets = "test")
    ResampleResult -> data.table::data.table()
    Returns a tabular view of the internal data.

  • c(...)
    (ResampleResult, ...) -> BenchmarkResult
    Combines multiple objects convertible to BenchmarkResult into a new BenchmarkResult.
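
A minimal sketch of both S3 methods, assuming rr_rpart and rr_featureless are ResampleResult objects (hypothetical names) obtained from resample() runs on the same task:

# tabular view: one row per resampling iteration
tab = as.data.table(rr_rpart)
head(tab)

# combine several resample results into a single BenchmarkResult
bmr = c(rr_rpart, rr_featureless)
bmr$aggregate(msr("classif.acc"))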

Active bindings

task_type

(character(1))
Task type of objects in the ResampleResult, e.g. "classif" or "regr". This is NA for empty ResampleResults.

uhash

(character(1))
Unique hash for this object.

iters

(integer(1))
Number of resampling iterations stored in the ResampleResult.

task

(Task)
The task resample() operated on.

learner

(Learner)
Learner prototype resample() operated on. For the list of trained learners, see the field ⁠$learners⁠.

resampling

(Resampling)
Instantiated Resampling object which stores the splits into training and test sets.

learners

(list of Learner)
List of trained learners, sorted by resampling iteration.
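
A minimal sketch of accessing a trained learner, assuming resample() was called with store_models = TRUE (by default, fitted models are not stored):

rr = resample(tsk("penguins"), lrn("classif.rpart"), rsmp("cv", folds = 3),
  store_models = TRUE)

# fitted rpart model of the first resampling iteration
rr$learners[[1]]$model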

warnings

(data.table::data.table())
A table with all warning messages. Column names are "iteration" and "msg". Note that there can be multiple rows per resampling iteration if multiple warnings have been recorded.

errors

(data.table::data.table())
A table with all error messages. Column names are "iteration" and "msg". Note that there can be multiple rows per resampling iteration if multiple errors have been recorded.

Methods

Public methods


Method new()

Creates a new instance of this R6 class. An alternative construction method is provided by as_resample_result().

Usage
ResampleResult$new(data = ResultData$new(), view = NULL)
Arguments
data

(ResultData | data.table())
An object of type ResultData, either extracted from another ResampleResult, another BenchmarkResult, or manually constructed with as_result_data().

view

(character())
Single uhash of the ResultData to operate on. Used internally for optimizations.


Method format()

Helper for print outputs.

Usage
ResampleResult$format(...)
Arguments
...

(ignored).


Method print()

Printer.

Usage
ResampleResult$print(...)
Arguments
...

(ignored).


Method help()

Opens the corresponding help page referenced by field ⁠$man⁠.

Usage
ResampleResult$help()

Method prediction()

Combined Prediction of all individual resampling iterations and all provided predict sets. Note that, by default, most performance measures do not operate on this object directly, but instead on the prediction objects from the resampling iterations separately, and then combine the performance scores with the aggregate function of the respective Measure (macro averaging).

If you calculate the performance on this prediction object directly, this is called micro averaging.

Usage
ResampleResult$prediction(predict_sets = "test")
Arguments
predict_sets

(character())
Subset of ⁠{"train", "test"}⁠.

Returns

Prediction or empty list() if no predictions are available.


Method predictions()

List of prediction objects, sorted by resampling iteration. If multiple predict sets are given, they are combined into a single Prediction object per iteration.

If you evaluate the performance on all of the returned prediction objects and then average them, this is called macro averaging. For micro averaging, operate on the combined prediction object as returned by ⁠$prediction()⁠.

Usage
ResampleResult$predictions(predict_sets = "test")
Arguments
predict_sets

(character())
Subset of ⁠{"train", "test", "internal_valid"}⁠.

Returns

List of Prediction objects, one per resampling iteration, or a list of empty list()s if no predictions are available.


Method score()

Returns a table with one row for each resampling iteration, including all involved objects: Task, Learner, Resampling, iteration number (integer(1)), and (if enabled) one Prediction for each predict set of the Learner. Additionally, a column with the individual (per resampling iteration) performance is added for each Measure in measures, named with the id of the respective Measure. If measures is NULL, measures defaults to the return value of default_measures().

Usage
ResampleResult$score(
  measures = NULL,
  ids = TRUE,
  conditions = FALSE,
  predictions = TRUE
)
Arguments
measures

(Measure | list of Measure)
Measure(s) to calculate.

ids

(logical(1))
If ids is TRUE, extra columns with the ids of objects ("task_id", "learner_id", "resampling_id") are added to the returned table. These allow for more convenient subsetting.

conditions

(logical(1))
If TRUE, condition messages ("warnings", "errors") are added as extra list columns of character vectors to the returned table.

predictions

(logical(1))
If TRUE, additionally return prediction objects, one column for each predict set of the learner. Columns are named "prediction_train", "prediction_test" and "prediction_internal_valid", if present.

Returns

data.table::data.table().
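
A minimal sketch, assuming rr is the classification ResampleResult from the Examples below:

scores = rr$score(msr("classif.acc"))
scores[, c("iteration", "task_id", "learner_id", "classif.acc")]

# additionally collect warnings and errors as list columns
rr$score(conditions = TRUE)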


Method obs_loss()

Calculates the observation-wise loss via the loss function set in the Measure's field obs_loss. Returns a data.table() with the columns of the matching Prediction object plus one additional numeric column for each measure, named with the respective measure id. If there is no observation-wise loss function for the measure, the column is filled with NA values. Note that some measures, such as RMSE, do have an ⁠$obs_loss⁠, but require an additional transformation after aggregation; for RMSE this is taking the square root.

Usage
ResampleResult$obs_loss(measures = NULL, predict_sets = "test")
Arguments
measures

(Measure | list of Measure)
Measure(s) to calculate.

predict_sets

(character())
The predict sets.
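
A minimal sketch, assuming rr is the classification ResampleResult from the Examples below and that the chosen measure defines an observation-wise loss (classif.ce is assumed to, with the 0/1 misclassification indicator as its loss):

# one row per observation, plus one column with the per-observation loss
ol = rr$obs_loss(msr("classif.ce"))
head(ol)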


Method aggregate()

Calculates and aggregates performance values for all provided measures, according to the respective aggregation function in Measure. If measures is NULL, measures defaults to the return value of default_measures().

Usage
ResampleResult$aggregate(measures = NULL)
Arguments
measures

(Measure | list of Measure)
Measure(s) to calculate.

Returns

Named numeric().


Method filter()

Subsets the ResampleResult, reducing it to only keep the iterations specified in iters.

Usage
ResampleResult$filter(iters)
Arguments
iters

(integer())
Resampling iterations to keep.

Returns

Returns the object itself, but modified by reference. You need to explicitly ⁠$clone()⁠ the object beforehand if you want to keep the object in its previous state.
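
A minimal sketch, assuming rr has at least two resampling iterations:

# clone first so the original result is preserved
rr_sub = rr$clone(deep = TRUE)
rr_sub$filter(iters = 1:2)
rr_sub$iters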


Method discard()

Shrinks the ResampleResult by discarding parts of the internally stored data. Note that certain operations might stop working, e.g. extracting importance values from learners or calculating measures requiring the task's data.

Usage
ResampleResult$discard(backends = FALSE, models = FALSE)
Arguments
backends

(logical(1))
If TRUE, the DataBackend is removed from all stored Tasks.

models

(logical(1))
If TRUE, the stored model is removed from all Learners.

Returns

Returns the object itself, but modified by reference. You need to explicitly ⁠$clone()⁠ the object beforehand if you want to keep the object in its previous state.
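
A minimal sketch of shrinking the object once all required results have been extracted:

# extract everything still needed before discarding
acc = rr$aggregate(msr("classif.acc"))
rr$discard(backends = TRUE, models = TRUE)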


Method marshal()

Marshals all stored models.

Usage
ResampleResult$marshal(...)
Arguments
...

(any)
Additional arguments passed to marshal_model().


Method unmarshal()

Unmarshals all stored models.

Usage
ResampleResult$unmarshal(...)
Arguments
...

(any)
Additional arguments passed to unmarshal_model().
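
A minimal sketch of marshalling before serialization and unmarshalling after loading; for models that serialize without special handling (such as rpart trees), these calls are assumed to be no-ops:

rr$marshal()
path = tempfile(fileext = ".rds")
saveRDS(rr, path)

rr2 = readRDS(path)
rr2$unmarshal()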


Method clone()

The objects of this class are cloneable with this method.

Usage
ResampleResult$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other resample: resample()

Examples

task = tsk("penguins")
learner = lrn("classif.rpart")
resampling = rsmp("cv", folds = 3)
rr = resample(task, learner, resampling)
print(rr)

# combined predictions and predictions for each fold separately
rr$prediction()
rr$predictions()

# folds scored separately, then aggregated (macro)
rr$aggregate(msr("classif.acc"))

# predictions first combined, then scored (micro)
rr$prediction()$score(msr("classif.acc"))

# check for warnings and errors
rr$warnings
rr$errors
