FSelectInstanceBatchMultiCrit | R Documentation
The FSelectInstanceBatchMultiCrit specifies a feature selection problem for a FSelector. The function fsi() creates a FSelectInstanceBatchMultiCrit, and the function fselect() creates an instance internally.
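The shorthand can be sketched as follows. This is a minimal sketch assuming mlr3, mlr3fselect, and the argument names `fselector` and `term_evals` from the mlr3fselect API; adjust to your installed version.

```r
library(mlr3)
library(mlr3fselect)

# fselect() constructs a FSelectInstanceBatchMultiCrit internally
# (two measures make the problem multi-criteria) and runs the FSelector.
instance = fselect(
  fselector = fs("random_search", batch_size = 2),
  task = tsk("penguins"),
  learner = lrn("classif.rpart"),
  resampling = rsmp("cv", folds = 3),
  measures = msrs(c("classif.ce", "time_train")),
  term_evals = 4
)

# The optimized instance holds the Pareto-optimal feature sets
instance$result_feature_set
```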
There are several sections about feature selection in the mlr3book.
Learn about multi-objective optimization.
The gallery features a collection of case studies and demos about optimization.
For analyzing the feature selection results, it is recommended to pass the archive to as.data.table(). The returned data table is joined with the benchmark result, which adds the mlr3::ResampleResult for each feature set.

The archive provides various getters (e.g. $learners()) to ease access. All getters extract by position (i) or unique hash (uhash). For a complete list of getters, see the methods section.

The benchmark result ($benchmark_result) allows scoring the feature sets again on a different measure. Alternatively, measures can be supplied to as.data.table().
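The archive workflow above can be sketched as follows, assuming `instance` has already been optimized as in the Examples section; the `measures` argument of as.data.table() is taken from the mlr3fselect API.

```r
library(data.table)

# Join the archive with the benchmark result; this adds the
# mlr3::ResampleResult for each evaluated feature set.
results = as.data.table(instance$archive)

# Getters extract by position (i) or unique hash (uhash), e.g. the
# learner used for the first evaluated feature set.
instance$archive$learners(i = 1)

# Re-score all feature sets on a different measure via the stored
# benchmark result.
instance$archive$benchmark_result$score(msr("classif.acc"))

# Alternatively, supply additional measures directly to as.data.table().
as.data.table(instance$archive, measures = msrs("classif.acc"))
```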
bbotk::OptimInstance -> bbotk::OptimInstanceBatch -> bbotk::OptimInstanceBatchMultiCrit -> FSelectInstanceBatchMultiCrit
result_feature_set
(list of character())
Feature sets for task subsetting.
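A short sketch of task subsetting with this field, assuming `instance` and `task` from the Examples section; picking the first Pareto-optimal set is an arbitrary choice for illustration.

```r
# Each element of $result_feature_set is a character vector of feature
# names; use it to restrict the task to the selected features.
best_features = instance$result_feature_set[[1]]
task$select(best_features)
```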
new()
Creates a new instance of this R6 class.
FSelectInstanceBatchMultiCrit$new(
  task,
  learner,
  resampling,
  measures,
  terminator,
  store_benchmark_result = TRUE,
  store_models = FALSE,
  check_values = FALSE,
  callbacks = NULL
)
task
(mlr3::Task)
Task to operate on.
learner
(mlr3::Learner)
Learner to optimize the feature subset for.
resampling
(mlr3::Resampling)
Resampling that is used to evaluate the performance of the feature subsets.
Uninstantiated resamplings are instantiated during construction so that all feature subsets are evaluated on the same data splits.
Already instantiated resamplings are kept unchanged.
measures
(list of mlr3::Measure)
Measures to optimize. If NULL, mlr3's default measure is used.
terminator
(bbotk::Terminator)
Stop criterion of the feature selection.
store_benchmark_result
(logical(1))
Store benchmark result in archive?
store_models
(logical(1))
Store models in benchmark result?
check_values
(logical(1))
Check the parameters before the evaluation and the results for validity?
callbacks
(list of CallbackBatchFSelect)
List of callbacks.
assign_result()
The FSelector object writes the best found feature subsets and estimated performance values here. For internal use.
FSelectInstanceBatchMultiCrit$assign_result(xdt, ydt, extra = NULL, ...)
xdt
(data.table::data.table())
x values as data.table. Each row is one point. Contains the value in the search space of the FSelectInstanceBatchMultiCrit object. Can contain additional columns for extra information.
ydt
(data.table::data.table())
Optimal outcomes, e.g. the Pareto front.
extra
(data.table::data.table())
Additional information.
...
(any)
Ignored.
print()
Printer.
FSelectInstanceBatchMultiCrit$print(...)
...
(ignored).
clone()
The objects of this class are cloneable with this method.
FSelectInstanceBatchMultiCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
# Feature selection on Palmer Penguins data set
task = tsk("penguins")
# Construct feature selection instance
instance = fsi(
task = task,
learner = lrn("classif.rpart"),
resampling = rsmp("cv", folds = 3),
measures = msrs(c("classif.ce", "time_train")),
terminator = trm("evals", n_evals = 4)
)
# Choose optimization algorithm
fselector = fs("random_search", batch_size = 2)
# Run feature selection
fselector$optimize(instance)
# Optimal feature sets
instance$result_feature_set
# Inspect all evaluated sets
as.data.table(instance$archive)