BenchmarkResult
This is the result container object returned by benchmark().
A BenchmarkResult consists of the data of multiple ResampleResults. The contents of a BenchmarkResult and a ResampleResult are almost identical, and the stored ResampleResults can be extracted via the $resample_result(i) method, where i is the index of the performed resample experiment. This allows us to investigate the extracted ResampleResult and individual resampling iterations, as well as the predictions and models from each fold.
BenchmarkResults can be visualized via mlr3viz's autoplot() function.
For statistical analysis of benchmark results and more advanced plots, see mlr3benchmark.
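As a minimal sketch of this extraction workflow (the "sonar" task and "classif.rpart" learner are only illustrative choices; store_models = TRUE is needed to keep the fitted models):
library(mlr3)
design = benchmark_grid(tsk("sonar"), lrn("classif.rpart"), rsmp("cv", folds = 3))
bmr = benchmark(design, store_models = TRUE)
rr = bmr$resample_result(1)      # extract the first resample experiment
rr$predictions()[[1]]            # predictions of the first fold
rr$learners[[1]]$model           # model fitted in the first fold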
as.data.table(rr, ..., reassemble_learners = TRUE, convert_predictions = TRUE, predict_sets = "test", task_characteristics = FALSE)
BenchmarkResult -> data.table::data.table()
Returns a tabular view of the internal data.
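For instance (a brief sketch, assuming bmr is a BenchmarkResult as created in the examples below):
tab = as.data.table(bmr)
head(tab)                        # one row per resampling iteration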
c(...)
(BenchmarkResult, ...) -> BenchmarkResult
Combines multiple objects convertible to BenchmarkResult into a new BenchmarkResult.
task_type
(character(1))
Task type of objects in the BenchmarkResult. All stored objects (Task, Learner, Prediction) in a single BenchmarkResult are required to have the same task type, e.g. "classif" or "regr". This is NA for empty BenchmarkResults.
tasks
(data.table::data.table())
Table of included Tasks with three columns: "task_hash" (character(1)), "task_id" (character(1)), and "task" (Task).
learners
(data.table::data.table())
Table of included Learners with three columns: "learner_hash" (character(1)), "learner_id" (character(1)), and "learner" (Learner).
Note that it is not feasible to access learned models via this field, as the training task would be ambiguous. For this reason, the learners are reset before they are returned. Instead, select a row from the table returned by $score().
resamplings
(data.table::data.table())
Table of included Resamplings with three columns: "resampling_hash" (character(1)), "resampling_id" (character(1)), and "resampling" (Resampling).
resample_results
(data.table::data.table())
Returns a table with the columns uhash (character()) and resample_result (ResampleResult).
n_resample_results
(integer(1))
Returns the total number of stored ResampleResults.
uhashes
(character())
Set of (unique) hashes of all included ResampleResults.
uhash_table
(data.table::data.table())
Table with columns uhash, learner_id, task_id and resampling_id.
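A brief sketch of inspecting these fields (assuming bmr is a BenchmarkResult as created in the examples below):
bmr$n_resample_results           # number of stored ResampleResults
bmr$uhashes                      # their unique hashes
bmr$uhash_table                  # maps each uhash to learner_id, task_id and resampling_id
bmr$resample_results             # table of uhashes and ResampleResult objects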
new()
Creates a new instance of this R6 class.
BenchmarkResult$new(data = NULL)
data
(ResultData)
An object of type ResultData, either extracted from another ResampleResult, another BenchmarkResult, or manually constructed with as_result_data().
help()
Opens the help page for this object.
BenchmarkResult$help()
format()
Helper for print outputs.
BenchmarkResult$format(...)
...
(ignored).
print()
Printer.
BenchmarkResult$print()
combine()
Fuses a second BenchmarkResult into itself, mutating the BenchmarkResult in-place.
If the second BenchmarkResult bmr is NULL, simply returns self.
Note that you can alternatively use the combine function c(), which calls this method internally.
BenchmarkResult$combine(bmr)
bmr
(BenchmarkResult)
A second BenchmarkResult object.
Returns the object itself, but modified by reference.
You need to explicitly $clone() the object beforehand if you want to keep the object in its previous state.
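A minimal sketch (assuming bmr1 and bmr2 are BenchmarkResults of the same task type, e.g. from separate benchmark() calls):
bmr_all = c(bmr1, bmr2)                  # builds a new, combined BenchmarkResult
bmr_backup = bmr1$clone(deep = TRUE)     # keep the previous state of bmr1
bmr1$combine(bmr2)                       # fuses bmr2 into bmr1 in-place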
marshal()
Marshals all stored models.
BenchmarkResult$marshal(...)
...
(any)
Additional arguments passed to marshal_model().
unmarshal()
Unmarshals all stored models.
BenchmarkResult$unmarshal(...)
...
(any)
Additional arguments passed to unmarshal_model().
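A brief sketch of a typical use, making stored models serializable before saving to disk (assuming bmr stores models; the file name is only illustrative):
bmr$marshal()                    # convert stored models into a serializable form
saveRDS(bmr, "bmr.rds")
bmr = readRDS("bmr.rds")
bmr$unmarshal()                  # restore the models for further use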
score()
Returns a table with one row for each resampling iteration, including all involved objects: Task, Learner, Resampling, iteration number (integer(1)), and Prediction. If ids is set to TRUE, character columns with the extracted ids are added to the table for convenient filtering: "task_id", "learner_id", and "resampling_id".
Additionally calculates the provided performance measures and binds the performance scores as extra columns. These columns are named using the id of the respective Measure.
BenchmarkResult$score(measures = NULL, ids = TRUE, conditions = FALSE, predictions = TRUE)
measures
(Measure | list of Measure)
Measure(s) to calculate.
ids
(logical(1))
Adds object ids ("task_id", "learner_id", "resampling_id") as extra character columns to the returned table.
conditions
(logical(1))
Adds condition messages ("warnings", "errors") as extra list columns of character vectors to the returned table.
predictions
(logical(1))
Additionally returns prediction objects, one column for each predict_set of all learners combined. Columns are named "prediction_train", "prediction_test" and "prediction_internal_valid", if present.
data.table::data.table().
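A minimal sketch (assuming a classification benchmark bmr as created in the examples below):
scores = bmr$score(msr("classif.acc"))
scores[, .(task_id, learner_id, iteration, classif.acc)]    # per-iteration scores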
obs_loss()
Calculates the observation-wise loss via the loss function set in the Measure's field obs_loss.
Returns a data.table() with the columns row_ids, truth, response and one additional numeric column for each measure, named with the respective measure id. If there is no observation-wise loss function for the measure, the column is filled with NA values.
Note that some measures, such as RMSE, do have an $obs_loss, but require an additional transformation after aggregation, in this example taking the square root.
BenchmarkResult$obs_loss(measures = NULL, predict_sets = "test")
measures
(Measure | list of Measure)
Measure(s) to calculate.
predict_sets
(character())
The predict sets.
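A minimal sketch (assuming a classification benchmark bmr; the classification error classif.ce provides an observation-wise 0/1 loss):
losses = bmr$obs_loss(msr("classif.ce"))
head(losses)                     # row_ids, truth, response and a classif.ce column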
aggregate()
Returns a result table where resampling iterations are combined into ResampleResults. A column with the aggregated performance score is added for each Measure, named with the id of the respective measure.
The method for aggregation is controlled by the Measure, e.g. micro aggregation, macro aggregation or custom aggregation. Most measures default to macro aggregation.
Note that the aggregated performances give only a quick impression of which approaches work well and which are probably underperforming. However, the aggregates do not account for variance and cannot replace a statistical test. See mlr3viz to get a better impression via boxplots or mlr3benchmark for critical difference plots and significance tests.
For convenience, different flags can be set to extract more information from the returned ResampleResult.
BenchmarkResult$aggregate(measures = NULL, ids = TRUE, uhashes = FALSE, params = FALSE, conditions = FALSE)
measures
(Measure | list of Measure)
Measure(s) to calculate.
ids
(logical(1))
Adds object ids ("task_id", "learner_id", "resampling_id") as extra character columns for convenient subsetting.
uhashes
(logical(1))
Adds the uhash values of the ResampleResults as extra character column "uhash".
params
(logical(1))
Adds the hyperparameter values as extra list column "params". You can unnest them with mlr3misc::unnest().
conditions
(logical(1))
Adds the number of resampling iterations with at least one warning as extra integer column "warnings", and the number of resampling iterations with errors as extra integer column "errors".
data.table::data.table().
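A brief sketch of the flags not covered in the examples below (assuming a classification benchmark bmr):
aggr = bmr$aggregate(msr("classif.acc"), uhashes = TRUE, conditions = TRUE)
aggr[, .(task_id, learner_id, uhash, warnings, errors, classif.acc)]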
filter()
Subsets the benchmark result.
You can either directly provide the row IDs or the uhashes of the resample results to keep, or use the learner_ids, task_ids and resampling_ids arguments to filter for learner, task and resampling IDs. The three options are mutually exclusive.
BenchmarkResult$filter(i = NULL, uhashes = NULL, learner_ids = NULL, task_ids = NULL, resampling_ids = NULL)
i
(integer() | NULL)
The iteration values to filter for.
uhashes
(character() | NULL)
The uhashes of the resample results to filter for.
learner_ids
(character() | NULL)
The learner IDs to filter for.
task_ids
(character() | NULL)
The task IDs to filter for.
resampling_ids
(character() | NULL)
The resampling IDs to filter for.
Returns the object itself, but modified by reference.
You need to explicitly $clone() the object beforehand if you want to keep the object in its previous state.
design = benchmark_grid(
  tsks(c("iris", "sonar")),
  lrns(c("classif.debug", "classif.featureless")),
  rsmp("holdout")
)
bmr = benchmark(design)
bmr
bmr2 = bmr$clone(deep = TRUE)
bmr2$filter(learner_ids = "classif.featureless")
bmr2
resample_result()
Retrieve the i-th ResampleResult, by position, by unique hash uhash, or by learner, task and resampling IDs. All three options are mutually exclusive.
BenchmarkResult$resample_result(i = NULL, uhash = NULL, task_id = NULL, learner_id = NULL, resampling_id = NULL)
i
(integer(1) | NULL)
The iteration value to filter for.
uhash
(character(1) | NULL)
The unique identifier to filter for.
task_id
(character(1) | NULL)
The task ID to filter for.
learner_id
(character(1) | NULL)
The learner ID to filter for.
resampling_id
(character(1) | NULL)
The resampling ID to filter for.
ResampleResult.
design = benchmark_grid(
  tsk("iris"),
  lrns(c("classif.debug", "classif.featureless")),
  rsmp("holdout")
)
bmr = benchmark(design)
bmr$resample_result(learner_id = "classif.featureless")
bmr$resample_result(i = 1)
bmr$resample_result(uhash = uhashes(bmr, learner_id = "classif.debug"))
discard()
Shrinks the BenchmarkResult by discarding parts of the internally stored data. Note that certain operations might stop working, e.g. extracting importance values from learners or calculating measures requiring the task's data.
BenchmarkResult$discard(backends = FALSE, models = FALSE)
backends
(logical(1))
If TRUE, the DataBackend is removed from all stored Tasks.
models
(logical(1))
If TRUE, the stored model is removed from all Learners.
Returns the object itself, but modified by reference.
You need to explicitly $clone() the object beforehand if you want to keep the object in its previous state.
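A minimal sketch (assuming bmr stores models, e.g. created with benchmark(design, store_models = TRUE)):
bmr$discard(models = TRUE)       # drop fitted models to reduce the memory footprint
bmr$discard(backends = TRUE)     # drop task data; measures requiring task data will fail afterwards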
set_threshold()
Sets the threshold for the response prediction of classification learners, given that they have output a probability prediction for a binary classification task.
The resample results for which to change the threshold can be specified directly via uhashes, by selecting the specific iterations (i), or by filtering according to learner, task and resampling IDs.
If none of the three options is specified, the threshold is set for all resample results.
BenchmarkResult$set_threshold(threshold, i = NULL, uhashes = NULL, learner_ids = NULL, task_ids = NULL, resampling_ids = NULL, ties_method = "random")
threshold
(numeric(1))
Threshold value.
i
(integer() | NULL)
The iteration values to filter for.
uhashes
(character() | NULL)
The unique identifiers of the ResampleResults for which the threshold should be set.
learner_ids
(character() | NULL)
The learner IDs for which the threshold should be set.
task_ids
(character() | NULL)
The task IDs for which the threshold should be set.
resampling_ids
(character() | NULL)
The resampling IDs for which the threshold should be set.
ties_method
(character(1))
Method to handle ties in probabilities when selecting a class label. Must be one of "random", "first" or "last" (corresponding to the same options in max.col()).
"random": Randomly select one of the tied class labels (default).
"first": Select the first class label among tied values.
"last": Select the last class label among tied values.
design = benchmark_grid(
  tsk("sonar"),
  lrns(c("classif.debug", "classif.featureless"), predict_type = "prob"),
  rsmp("holdout")
)
bmr = benchmark(design)
bmr$set_threshold(0.8, learner_ids = "classif.featureless")
bmr$set_threshold(0.3, i = 2)
bmr$set_threshold(0.7, uhashes = uhashes(bmr, learner_ids = "classif.featureless"))
clone()
The objects of this class are cloneable with this method.
BenchmarkResult$clone(deep = FALSE)
deep
Whether to make a deep clone.
All stored objects are accessed by reference. Do not modify any extracted object without cloning it first.
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter3/evaluation_and_benchmarking.html#sec-benchmarking
Package mlr3viz for some generic visualizations.
mlr3benchmark for post-hoc analysis of benchmark results.
Other benchmark: benchmark(), benchmark_grid()
set.seed(123)
learners = list(
lrn("classif.featureless", predict_type = "prob"),
lrn("classif.rpart", predict_type = "prob")
)
design = benchmark_grid(
tasks = list(tsk("sonar"), tsk("penguins")),
learners = learners,
resamplings = rsmp("cv", folds = 3)
)
print(design)
bmr = benchmark(design)
print(bmr)
bmr$tasks
bmr$learners
# first 5 resampling iterations
head(as.data.table(bmr, measures = c("classif.acc", "classif.auc")), 5)
# aggregate results
bmr$aggregate()
# aggregate results with hyperparameters as separate columns
mlr3misc::unnest(bmr$aggregate(params = TRUE), "params")
# extract resample result for classif.rpart
rr = bmr$aggregate()[learner_id == "classif.rpart", resample_result][[1]]
print(rr)
# access the confusion matrix of the first resampling iteration
rr$predictions()[[1]]$confusion
# reduce to subset with task id "sonar"
bmr$filter(task_ids = "sonar")
print(bmr)
## ------------------------------------------------
## Method `BenchmarkResult$filter`
## ------------------------------------------------
design = benchmark_grid(
tsks(c("iris", "sonar")),
lrns(c("classif.debug", "classif.featureless")),
rsmp("holdout")
)
bmr = benchmark(design)
bmr
bmr2 = bmr$clone(deep = TRUE)
bmr2$filter(learner_ids = "classif.featureless")
bmr2
## ------------------------------------------------
## Method `BenchmarkResult$resample_result`
## ------------------------------------------------
design = benchmark_grid(
tsk("iris"),
lrns(c("classif.debug", "classif.featureless")),
rsmp("holdout")
)
bmr = benchmark(design)
bmr$resample_result(learner_id = "classif.featureless")
bmr$resample_result(i = 1)
bmr$resample_result(uhash = uhashes(bmr, learner_id = "classif.debug"))
## ------------------------------------------------
## Method `BenchmarkResult$set_threshold`
## ------------------------------------------------
design = benchmark_grid(
tsk("sonar"),
lrns(c("classif.debug", "classif.featureless"), predict_type = "prob"),
rsmp("holdout")
)
bmr = benchmark(design)
bmr$set_threshold(0.8, learner_ids = "classif.featureless")
bmr$set_threshold(0.3, i = 2)
bmr$set_threshold(0.7, uhashes = uhashes(bmr, learner_ids = "classif.featureless"))