mlr_tuners_grid_search: Hyperparameter Tuning with Grid Search


Description

Subclass for grid search tuning.

Details

The grid is constructed as a Cartesian product over discretized values per parameter, see paradox::generate_design_grid(). If the learner supports hotstarting, the grid is sorted by the hotstart parameter (see also mlr3::HotstartStack). If not, the points of the grid are evaluated in a random order.
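
As an illustration, the grid that would be evaluated can be inspected directly with paradox. This is a minimal sketch using a hypothetical two-parameter search space (the names cp and minsplit and the resolution are illustrative):

library(paradox)

# hypothetical search space with one numeric and one integer parameter
search_space = ps(
  cp = p_dbl(lower = 1e-04, upper = 1e-1),
  minsplit = p_int(lower = 2, upper = 20)
)

# Cartesian product with 3 discretized values per parameter yields 3 * 3 = 9 points
generate_design_grid(search_space, resolution = 3)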

Dictionary

This Tuner can be instantiated with the associated sugar function tnr():

tnr("grid_search")

Control Parameters

resolution

integer(1)
Resolution of the grid, see paradox::generate_design_grid().

param_resolutions

named integer()
Resolution per parameter, named by parameter ID, see paradox::generate_design_grid().

batch_size

integer(1)
Maximum number of points to try in a batch.
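
For example, the control parameters can be passed to tnr() when constructing the tuner (the values below are illustrative):

# grid with 5 values per parameter, evaluated in batches of 10 points
tuner = tnr("grid_search", resolution = 5, batch_size = 10)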

Progress Bars

$optimize() supports progress bars via the package progressr combined with a bbotk::Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as the backend; enable it with progressr::handlers("progress").
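
A minimal sketch, assuming the tuning setup from the Examples section below:

library(progressr)

# use the progress package as backend and enable progress bars for the tuning run
handlers("progress")
with_progress({
  instance = tune(
    tuner = tnr("grid_search"),
    task = tsk("penguins"),
    learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
    resampling = rsmp("holdout"),
    measure = msr("classif.ce"),
    term_evals = 10
  )
})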

Parallelization

In order to support general termination criteria and parallelization, we evaluate points in batches of size batch_size. Larger batches allow more parallelization, while smaller batches enable a more fine-grained check of the termination criteria. A batch consists of batch_size times resampling$iters jobs. For example, if you set a batch size of 10 points and use 5-fold cross-validation, you can utilize up to 50 cores.

Parallelization is supported via package future (see mlr3::benchmark()'s section on parallelization for more details).
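
A minimal sketch of the example above (10 points per batch, 5-fold cross-validation), reusing the learner and task from the Examples section; the number of workers and term_evals are illustrative:

# run the resampling jobs of each batch on up to 4 parallel R sessions
future::plan("multisession", workers = 4)

instance = tune(
  tuner = tnr("grid_search", batch_size = 10),
  task = tsk("penguins"),
  learner = lrn("classif.rpart", cp = to_tune(1e-04, 1e-1, logscale = TRUE)),
  resampling = rsmp("cv", folds = 5),
  measure = msr("classif.ce"),
  term_evals = 20
)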

Logging

All Tuners use a logger (as implemented in lgr) from package bbotk. Use lgr::get_logger("bbotk") to access and control the logger.
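
For example, to silence informational messages from the optimization loop (the chosen threshold is illustrative):

# only log warnings and errors from bbotk
lgr::get_logger("bbotk")$set_threshold("warn")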

Optimizer

This Tuner is based on bbotk::OptimizerBatchGridSearch which can be applied on any black box optimization problem. See also the documentation of bbotk.
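
The underlying optimizer can also be constructed directly via bbotk's sugar function opt(); a minimal sketch (the resolution value is illustrative):

library(bbotk)

# same grid search strategy, applicable to any black box optimization problem
optimizer = opt("grid_search", resolution = 5)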

Resources

There are several sections about hyperparameter optimization in the mlr3book.

  • An overview of all tuners can be found on our website.

  • Learn more about tuners.

The gallery features a collection of case studies and demos about optimization.

  • Use the Hyperband optimizer with different budget parameters.

Super classes

mlr3tuning::Tuner -> mlr3tuning::TunerBatch -> mlr3tuning::TunerBatchFromOptimizerBatch -> TunerBatchGridSearch

Methods

Public methods


Method new()

Creates a new instance of this R6 class.

Usage
TunerBatchGridSearch$new()

Method clone()

The objects of this class are cloneable with this method.

Usage
TunerBatchGridSearch$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other Tuner: Tuner, mlr_tuners, mlr_tuners_cmaes, mlr_tuners_design_points, mlr_tuners_gensa, mlr_tuners_internal, mlr_tuners_irace, mlr_tuners_nloptr, mlr_tuners_random_search

Examples

# Hyperparameter Optimization

# load learner and set search space
learner = lrn("classif.rpart",
  cp = to_tune(1e-04, 1e-1, logscale = TRUE)
)

# run hyperparameter tuning on the Palmer Penguins data set
instance = tune(
  tuner = tnr("grid_search"),
  task = tsk("penguins"),
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10
)

# best performing hyperparameter configuration
instance$result

# all evaluated hyperparameter configurations
as.data.table(instance$archive)

# fit final model on complete data set
learner$param_set$values = instance$result_learner_param_vals
learner$train(tsk("penguins"))
