mlr_tuners_grid_search
Subclass for grid search tuning.
The grid is constructed as a Cartesian product over discretized values per parameter, see paradox::generate_design_grid().
If the learner supports hotstarting, the grid is sorted by the hotstart parameter (see also mlr3::HotstartStack).
If not, the points of the grid are evaluated in a random order.
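For illustration, a minimal sketch of how such a grid can be built directly with paradox::generate_design_grid(); the search space below is a made-up example:

library(paradox)

# hypothetical search space with one numeric and one integer parameter
search_space = ps(
  cp = p_dbl(lower = 1e-4, upper = 1e-1),
  minsplit = p_int(lower = 2, upper = 20)
)

# resolution = 3 discretizes each parameter into 3 values,
# giving a Cartesian grid of 3 * 3 = 9 points
generate_design_grid(search_space, resolution = 3)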
This Tuner can be instantiated with the associated sugar function tnr():
tnr("grid_search")
resolution
integer(1)
Resolution of the grid, see paradox::generate_design_grid().
param_resolutions
named integer()
Resolution per parameter, named by parameter ID, see paradox::generate_design_grid().
batch_size
integer(1)
Maximum number of points to try in a batch.
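As a sketch, these parameters can be set when constructing the Tuner; the values and parameter IDs below are only illustrative:

# set a global resolution and a batch size
tnr("grid_search", resolution = 5, batch_size = 10)

# or set a resolution per parameter (parameter IDs are illustrative)
tnr("grid_search", param_resolutions = c(cp = 10, minsplit = 3))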
$optimize() supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable it with progressr::handlers("progress").
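A minimal sketch of enabling progress bars, assuming a tuning instance instance has already been constructed:

library(progressr)

# use the progress package as backend for the progress bar
handlers("progress")

# wrap the optimization call to enable progress reporting
with_progress(
  tnr("grid_search")$optimize(instance)
)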
In order to support general termination criteria and parallelization, we evaluate points in a batch-fashion of size batch_size. Larger batches mean we can parallelize more, while smaller batches imply more fine-grained checking of termination criteria. A batch contains batch_size times resampling$iters jobs.
E.g., if you set a batch size of 10 points and do a 5-fold cross validation, you can
utilize up to 50 cores.
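As a sketch, the prose example above corresponds to a configuration like the following (task, learner, and measure omitted):

# 10 points per batch evaluated with 5-fold cross-validation
# -> each batch consists of 10 * 5 = 50 resampling jobs
tuner = tnr("grid_search", batch_size = 10)
resampling = rsmp("cv", folds = 5)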
Parallelization is supported via package future (see mlr3::benchmark()'s section on parallelization for more details).
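For example, a parallel future plan can be set before tuning so that the resampling jobs of a batch run in parallel; this is a sketch and the backend choice is up to the user:

# run the resampling jobs of each batch in parallel on the local machine
future::plan("multisession")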
All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
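For example, to reduce the verbosity of the tuning log (a minimal sketch):

# access the bbotk logger and only show warnings
lgr::get_logger("bbotk")$set_threshold("warn")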
This Tuner is based on bbotk::OptimizerBatchGridSearch, which can be applied to any black box optimization problem. See also the documentation of bbotk.
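A minimal sketch of using the underlying optimizer directly via bbotk's sugar function, assuming the optimizer is registered in bbotk's optimizer dictionary under the key "grid_search":

library(bbotk)

# the same grid search, as a plain black box optimizer
optimizer = opt("grid_search", resolution = 5)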
There are several sections about hyperparameter optimization in the mlr3book. The gallery features a collection of case studies and demos about optimization, for example using the Hyperband optimizer with different budget parameters.
mlr3tuning::Tuner -> mlr3tuning::TunerBatch -> mlr3tuning::TunerBatchFromOptimizerBatch -> TunerBatchGridSearch
new()
Creates a new instance of this R6 class.
TunerBatchGridSearch$new()
clone()
The objects of this class are cloneable with this method.
TunerBatchGridSearch$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Tuner: Tuner, mlr_tuners, mlr_tuners_cmaes, mlr_tuners_design_points, mlr_tuners_gensa, mlr_tuners_internal, mlr_tuners_irace, mlr_tuners_nloptr, mlr_tuners_random_search
# Hyperparameter Optimization
library(mlr3tuning)

# load learner and set search space
learner = lrn("classif.rpart",
cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)
# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
tuner = tnr("grid_search"),
task = tsk("penguins"),
learner = learner,
resampling = rsmp("holdout"),
measure = msr("classif.ce"),
term_evals = 10
)
# best performing hyperparameter configuration
instance$result
# all evaluated hyperparameter configurations
as.data.table(instance$archive)
# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))