tune_bayes  R Documentation 
tune_bayes()
uses models to generate new candidate tuning parameter
combinations based on previous results.
tune_bayes(object, ...)

## S3 method for class 'model_spec'
tune_bayes(
  object,
  preprocessor,
  resamples,
  ...,
  iter = 10,
  param_info = NULL,
  metrics = NULL,
  eval_time = NULL,
  objective = exp_improve(),
  initial = 5,
  control = control_bayes()
)

## S3 method for class 'workflow'
tune_bayes(
  object,
  resamples,
  ...,
  iter = 10,
  param_info = NULL,
  metrics = NULL,
  eval_time = NULL,
  objective = exp_improve(),
  initial = 5,
  control = control_bayes()
)
object
A parsnip model specification or an unfitted workflows::workflow().

...
Options to pass to GPfit::GP_fit() (mostly for the corr argument).

preprocessor
A traditional model formula or a recipe created using recipes::recipe().

resamples
An rset resampling object created from an rsample function, such as rsample::vfold_cv().

iter
The maximum number of search iterations.

param_info
A dials::parameters() object or NULL. If none is given, a parameter set is derived from the other arguments. Passing this argument can be useful when parameter ranges need to be customized.

metrics
A yardstick::metric_set() object containing information on how models will be evaluated for performance. The first metric in the set is the one that will be optimized.

eval_time
A numeric vector of time points where dynamic event time metrics should be computed (e.g., the time-dependent ROC curve, etc.). The values must be non-negative and should probably be no greater than the largest event time in the training set (see Details below).

objective
A character string for what metric should be optimized or an acquisition function object.

initial
An initial set of results in a tidy format (as would result from tune_grid()) or a positive integer. It is suggested that the number of initial results be greater than the number of parameters being optimized.

control
A control object created by control_bayes().
The optimization starts with a set of initial results, such as those
generated by tune_grid()
. If none exist, the function will create several
combinations and obtain their performance estimates.
Using one of the performance estimates as the model outcome, a Gaussian process (GP) model is created where the previous tuning parameter combinations are used as the predictors.
A large grid of potential hyperparameter combinations is predicted using
the model and scored using an acquisition function. These functions
usually combine the predicted mean and variance of the GP to decide the best
parameter combination to try next. For more information, see the
documentation for exp_improve()
and the corresponding package vignette.
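As a rough sketch of how these pieces are specified (the trade-off, kappa, and no_improve values below are arbitrary choices for illustration, not required defaults):

# Acquisition functions that can be passed to the `objective` argument; each
# balances the GP's predicted mean against its predicted uncertainty
# differently (trade-off values here are arbitrary):
library(tune)
exp_improve(trade_off = 0.01)   # expected improvement (the default objective)
prob_improve(trade_off = 0.01)  # probability of improvement
conf_bound(kappa = 0.1)         # confidence bound

# Search behavior is set via the control object, e.g., stop after 15
# iterations without improvement and print progress:
ctrl <- control_bayes(no_improve = 15, verbose = TRUE)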
The best combination is evaluated using resampling and the process continues.
A tibble of results that mirror those generated by tune_grid()
.
However, these results contain an .iter
column and replicate the rset
object multiple times over iterations (at limited additional memory costs).
tune supports parallel processing with the future package. To execute
the resampling iterations in parallel, specify a plan with
future first. The allow_par
argument can be used to avoid parallelism.
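For example, a plan can be registered before tuning (the worker count below is an arbitrary choice):

# A minimal sketch: register a future plan before calling tune_bayes() so the
# resamples are processed in parallel.
library(future)
plan(multisession, workers = 4)

# Parallelism can be disabled for a single call via the control object:
ctrl <- control_bayes(allow_par = FALSE)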
For the most part, warnings generated during training are shown as they occur
and are associated with a specific resample when
control_bayes(verbose = TRUE)
. They are (usually) not aggregated until the
end of processing.
For Bayesian optimization, parallel processing is used to estimate the resampled performance values once a new candidate set of values is estimated.
The results of tune_grid()
, or a previous run of tune_bayes()
can be used
in the initial
argument. initial
can also be a positive integer. In this
case, a space-filling design will be used to populate a preliminary set of
results. For good results, the number of initial values should be more than
the number of parameters being optimized.
In some cases, the tuning parameter values depend on the dimensions of the
data (they are said to contain unknown values). For
example, mtry
in random forest models depends on the number of predictors.
In such cases, the unknowns in the tuning parameter object must be determined
beforehand and passed to the function via the param_info
argument.
dials::finalize()
can be used to derive the data-dependent parameters.
Otherwise, a parameter set can be created via dials::parameters()
, and the
dials
update()
function can be used to specify the ranges or values.
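As a sketch of finalizing a data-dependent parameter (using the mtcars predictors purely for illustration):

library(dials)

# `mtry` has an unknown upper bound until the number of predictors is known,
# so finalize it against the predictor columns (mtcars minus the outcome):
rf_params <- parameters(list(mtry(), min_n()))
rf_params <- finalize(rf_params, mtcars[, -1])

# Ranges can also be set manually with update():
rf_params <- update(rf_params, min_n = min_n(c(2L, 20L)))

# The finalized set is then passed to tune_bayes() via `param_info`.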
To use your own performance metrics, the yardstick::metric_set()
function
can be used to pick what should be measured for each model. If multiple
metrics are desired, they can be bundled. For example, to estimate the area
under the ROC curve as well as the sensitivity and specificity (under the
typical probability cutoff of 0.50), the metrics
argument could be given:
metrics = metric_set(roc_auc, sens, spec)
Each metric is calculated for each candidate model.
If no metric set is provided, one is created:
For regression models, the root mean squared error and coefficient of determination are computed.
For classification, the area under the ROC curve and overall accuracy are computed.
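These defaults are roughly equivalent to metric sets such as the following:

library(yardstick)

# Approximately the defaults used when `metrics = NULL`:
metric_set(rmse, rsq)           # regression
metric_set(roc_auc, accuracy)   # classification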
Note that the metrics also determine what type of predictions are estimated during tuning. For example, in a classification problem, if metrics are used that are all associated with hard class predictions, the classification probabilities are not created.
The out-of-sample estimates of these metrics are contained in a list column
called .metrics
. This tibble contains a row for each metric and columns
for the value, the estimator type, and so on.
collect_metrics()
can be used for these objects to collapse the results
over the resamples (to obtain the final resampling estimates per tuning
parameter combination).
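For example, with `bayes_res` standing in for any tune_bayes() result (such as svm_bayes in the Examples below):

# A sketch, assuming `bayes_res` is a tune_bayes() result:
collect_metrics(bayes_res)                     # averaged over resamples, one row per
                                               # metric, parameter combination, and iteration
collect_metrics(bayes_res, summarize = FALSE)  # per-resample values instead of averages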
When control_bayes(save_pred = TRUE)
, the output tibble contains a list
column called .predictions
that has the out-of-sample predictions for each
parameter combination in the grid and each fold (which can be very large).
The elements of the tibble are tibbles with columns for the tuning
parameters, the row number from the original data object (.row
), the
outcome data (with the same name(s) of the original data), and any columns
created by the predictions. For example, for simple regression problems, this
function generates a column called .pred
and so on. As noted above, the
prediction columns that are returned are determined by the type of metric(s)
requested.
This list column can be unnested
using tidyr::unnest()
or using the
convenience function collect_predictions()
.
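A sketch of retaining and collecting predictions, again using a hypothetical result object `bayes_res`:

# Retain out-of-sample predictions while tuning (can be memory intensive):
ctrl <- control_bayes(save_pred = TRUE)

# After running tune_bayes(..., control = ctrl) and storing the result in
# `bayes_res`:
collect_predictions(bayes_res)

# Or unnest the list column directly:
tidyr::unnest(bayes_res, cols = .predictions)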
Some models can utilize case weights during training. tidymodels currently supports two types of case weights: importance weights (doubles) and frequency weights (integers). Frequency weights are used during model fitting and evaluation, whereas importance weights are only used during fitting.
To know if your model is capable of using case weights, create a model spec
and test it using parsnip::case_weights_allowed()
.
To use them, you will need a numeric column in your data set that has been
passed through either hardhat::importance_weights()
or
hardhat::frequency_weights()
.
For functions such as fit_resamples()
and the tune_*()
functions, the
model must be contained inside of a workflows::workflow()
. To declare that
case weights are used, invoke workflows::add_case_weights()
with the
corresponding (unquoted) column name.
From there, the packages will appropriately handle the weights during model fitting and (if appropriate) performance estimation.
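A sketch (with arbitrary column and object names) of declaring frequency weights in a workflow:

library(parsnip)
library(workflows)
library(hardhat)
library(dplyr)

# Add a frequency-weight column (the values here are arbitrary):
cars_wtd <- mtcars %>%
  mutate(wts = frequency_weights(rep(1:2, length.out = n())))

# Check whether the model/engine can use case weights:
lm_mod <- linear_reg()
case_weights_allowed(lm_mod)

# Declare the weights in the workflow with the unquoted column name:
wt_wflow <- workflow() %>%
  add_formula(mpg ~ disp + wt) %>%
  add_model(lm_mod) %>%
  add_case_weights(wts)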
Three types of metrics can be used to assess the quality of censored regression models:
static: the prediction is independent of time.
dynamic: the prediction is a time-specific probability (e.g., survival probability) and is measured at one or more particular times.
integrated: same as the dynamic metric but returns the integral of the different metrics from each time point.
Which metrics are chosen by the user affects how many evaluation times should be specified. For example:
# Needs no `eval_time` value
metric_set(concordance_survival)

# Needs at least one `eval_time`
metric_set(brier_survival)
metric_set(brier_survival, concordance_survival)

# Needs at least two `eval_time` values
metric_set(brier_survival_integrated, concordance_survival)
metric_set(brier_survival_integrated, concordance_survival)
metric_set(brier_survival_integrated, concordance_survival, brier_survival)
Values of eval_time
should be less than the largest observed event
time in the training data. For many nonparametric models, the results beyond
the largest time corresponding to an event are constant (or NA
).
With dynamic performance metrics (e.g. Brier or ROC curves), performance is
calculated for every value of eval_time
but the first evaluation time
given by the user (e.g., eval_time[1]
) is used to guide the optimization.
The extract
control option will result in an additional column to be
returned called .extracts
. This is a list column that has tibbles
containing the results of the user's function for each tuning parameter
combination. This can enable returning each model and/or recipe object that
is created during resampling. Note that this could result in a large return
object, depending on what is returned.
The control function contains an option (extract
) that can be used to
retain any model or recipe that was created within the resamples. This
argument should be a function with a single argument. The value of the
argument that is given to the function in each resample is a workflow
object (see workflows::workflow()
for more information). Several
helper functions can be used to easily pull out the preprocessing
and/or model information from the workflow, such as
extract_preprocessor()
and
extract_fit_parsnip()
.
As an example, if there is interest in getting each parsnip model fit back, one could use:
extract = function (x) extract_fit_parsnip(x)
Note that the function given to the extract
argument is evaluated on
every model that is fit (as opposed to every model that is evaluated).
As noted above, in some cases, model predictions can be derived for
submodels so that, in these cases, not every row in the tuning parameter
grid has a separate R object associated with it.
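A sketch of an extract function that keeps both the fitted parsnip model and the preprocessor from every fit:

library(tune)
library(workflows)

ctrl <- control_bayes(
  extract = function(x) {
    list(
      fit = extract_fit_parsnip(x),
      preproc = extract_preprocessor(x)
    )
  }
)
# After tune_bayes(..., control = ctrl), the results live in the `.extracts`
# list column and can be flattened with collect_extracts().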
control_bayes()
, tune()
, autoplot.tune_results()
,
show_best()
, select_best()
, collect_predictions()
,
collect_metrics()
, prob_improve()
, exp_improve()
, conf_bound()
,
fit_resamples()
library(recipes)
library(rsample)
library(parsnip)
library(tune)

# define resamples and minimal recipe on mtcars
set.seed(6735)
folds <- vfold_cv(mtcars, v = 5)

car_rec <-
  recipe(mpg ~ ., data = mtcars) %>%
  step_normalize(all_predictors())

# define an svm with parameters to tune
svm_mod <-
  svm_rbf(cost = tune(), rbf_sigma = tune()) %>%
  set_engine("kernlab") %>%
  set_mode("regression")

# use a space-filling design with 6 points
set.seed(3254)
svm_grid <- tune_grid(svm_mod, car_rec, folds, grid = 6)

show_best(svm_grid, metric = "rmse")

# use Bayesian optimization to evaluate at 6 more points
set.seed(8241)
svm_bayes <- tune_bayes(svm_mod, car_rec, folds, initial = svm_grid, iter = 6)

# note that Bayesian optimization evaluated parameterizations
# similar to those that previously decreased rmse in svm_grid
show_best(svm_bayes, metric = "rmse")

# specifying `initial` as a numeric rather than previous tuning results
# will result in `tune_bayes` initially evaluating a space-filling
# grid using `tune_grid` with `grid = initial`
set.seed(0239)
svm_init <- tune_bayes(svm_mod, car_rec, folds, initial = 6, iter = 6)

show_best(svm_init, metric = "rmse")