mlr_tuners_async_successive_halving: Asynchronous Hyperparameter Tuning with Successive Halving


Description

TunerAsyncSuccessiveHalving class that implements the Asynchronous Successive Halving Algorithm (ASHA). This class implements the asynchronous version of TunerBatchSuccessiveHalving.

Dictionary

This mlr3tuning::Tuner can be instantiated via the dictionary mlr3tuning::mlr_tuners or with the associated sugar function mlr3tuning::tnr():

TunerAsyncSuccessiveHalving$new()
mlr_tuners$get("async_successive_halving")
tnr("async_successive_halving")

Subsample Budget

If the learner lacks a natural budget parameter, mlr3pipelines::PipeOpSubsample can be applied to use the subsampling rate as the budget parameter. The resulting mlr3pipelines::GraphLearner is fitted on small proportions of the mlr3::Task in the first stage and on the complete task in the last stage.
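A minimal sketch of this setup is shown below; the pipeline, learner, and parameter ranges are illustrative assumptions, not defaults of this package:

library(mlr3)
library(mlr3pipelines)
library(mlr3tuning)

# wrap a learner without a natural budget parameter in a subsampling pipeline
graph_learner = as_learner(po("subsample") %>>% lrn("classif.rpart"))

# tag the subsampling fraction as the budget parameter (ranges are assumptions)
graph_learner$param_set$set_values(
  subsample.frac = to_tune(p_dbl(1e-2, 1, tags = "budget")),
  classif.rpart.cp = to_tune(1e-4, 1e-1, logscale = TRUE)
)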

Custom Sampler

ASHA supports a custom paradox::Sampler object for drawing the initial configurations. A custom sampler may look like this (the full example is given in the examples section):

# `params` is a list of the search space parameters (see the examples section)
# - beta distribution with alpha = 2 and beta = 5 for the second parameter
# - categorical distribution with custom probabilities for the third parameter
sampler = SamplerJointIndep$new(list(
  Sampler1DRfun$new(params[[2]], function(n) rbeta(n, 2, 5)),
  Sampler1DCateg$new(params[[3]], prob = c(0.2, 0.3, 0.5))
))
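The sampler is then passed to the tuner through its sampler parameter; a sketch, continuing the snippet above:

tuner = tnr("async_successive_halving", sampler = sampler)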

Parameters

eta

numeric(1)
At each stage, the budget is increased by a factor of eta and only the best 1 / eta fraction of configurations is promoted to the next stage. Non-integer values are supported, but eta must be strictly greater than 1. See the sketch after the parameter list for a worked example.

sampler

paradox::Sampler
Object defining how the samples of the parameter space should be drawn. The default is uniform sampling.
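To illustrate the eta schedule (the starting values below are assumptions for the sake of the example, not defaults): with eta = 2, 16 starting configurations, and a minimum budget of 1, the per-stage budgets double while the number of surviving configurations halves:

eta = 2
n_configs = 16           # starting configurations (assumed)
min_budget = 1           # minimum budget (assumed)
stages = 0:3
min_budget * eta^stages  # budget per stage:     1  2  4  8
n_configs / eta^stages   # configurations kept: 16  8  4  2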

Archive

The bbotk::Archive holds the following additional columns that are specific to SHA:

  • stage (integer(1))
    Stage index. Starts counting at 0.

  • asha_id (character(1))
    Unique identifier for each configuration across stages.
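These columns can be inspected after tuning. A sketch, assuming instance is a tuning instance already optimized by this tuner:

library(data.table)
archive = as.data.table(instance$archive)
archive[, .(asha_id, stage)]  # one row per evaluated configuration and stage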

Super classes

mlr3tuning::Tuner -> mlr3tuning::TunerAsync -> mlr3tuning::TunerAsyncFromOptimizerAsync -> TunerAsyncSuccessiveHalving

Methods

Public methods

  • TunerAsyncSuccessiveHalving$new()
  • TunerAsyncSuccessiveHalving$clone()

Method new()

Creates a new instance of this R6 class.

Usage
TunerAsyncSuccessiveHalving$new()

Method clone()

The objects of this class are cloneable with this method.

Usage
TunerAsyncSuccessiveHalving$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

Source

Li L, Jamieson K, Rostamizadeh A, Gonina E, Ben-tzur J, Hardt M, Recht B, Talwalkar A (2020). “A System for Massively Parallel Hyperparameter Tuning.” In Dhillon I, Papailiopoulos D, Sze V (eds.), Proceedings of Machine Learning and Systems, volume 2, 230–246. https://proceedings.mlsys.org/paper_files/paper/2020/hash/a06f20b349c6cf09a6b171c71b88bbfc-Abstract.html.

