Code
set.seed(2)
res <- tune_bayes(wflow, resamples = folds, param_info = pset, initial = iter1,
iter = iter2, control = control)
Code
tune_bayes(wflow, resamples = folds, param_info = pset, initial = iter1, iter = iter2,
control = control_bayes(verbose = TRUE))
Message
> Generating a set of 2 initial parameter results
v Initialization complete
i Estimating performance
i Fold01: preprocessor 1/1
i Fold01: preprocessor 1/1 (prediction data)
i Fold01: preprocessor 1/1, model 1/1
i Fold01: preprocessor 1/1, model 1/1 (predictions)
i Fold02: preprocessor 1/1
i Fold02: preprocessor 1/1 (prediction data)
i Fold02: preprocessor 1/1, model 1/1
i Fold02: preprocessor 1/1, model 1/1 (predictions)
i Fold03: preprocessor 1/1
i Fold03: preprocessor 1/1 (prediction data)
i Fold03: preprocessor 1/1, model 1/1
i Fold03: preprocessor 1/1, model 1/1 (predictions)
i Fold04: preprocessor 1/1
i Fold04: preprocessor 1/1 (prediction data)
i Fold04: preprocessor 1/1, model 1/1
i Fold04: preprocessor 1/1, model 1/1 (predictions)
i Fold05: preprocessor 1/1
i Fold05: preprocessor 1/1 (prediction data)
i Fold05: preprocessor 1/1, model 1/1
i Fold05: preprocessor 1/1, model 1/1 (predictions)
i Fold06: preprocessor 1/1
i Fold06: preprocessor 1/1 (prediction data)
i Fold06: preprocessor 1/1, model 1/1
i Fold06: preprocessor 1/1, model 1/1 (predictions)
i Fold07: preprocessor 1/1
i Fold07: preprocessor 1/1 (prediction data)
i Fold07: preprocessor 1/1, model 1/1
i Fold07: preprocessor 1/1, model 1/1 (predictions)
i Fold08: preprocessor 1/1
i Fold08: preprocessor 1/1 (prediction data)
i Fold08: preprocessor 1/1, model 1/1
i Fold08: preprocessor 1/1, model 1/1 (predictions)
i Fold09: preprocessor 1/1
i Fold09: preprocessor 1/1 (prediction data)
i Fold09: preprocessor 1/1, model 1/1
i Fold09: preprocessor 1/1, model 1/1 (predictions)
i Fold10: preprocessor 1/1
i Fold10: preprocessor 1/1 (prediction data)
i Fold10: preprocessor 1/1, model 1/1
i Fold10: preprocessor 1/1, model 1/1 (predictions)
v Estimating performance
i Estimating performance
i Fold01: preprocessor 1/1
i Fold01: preprocessor 1/1 (prediction data)
i Fold01: preprocessor 1/1, model 1/1
i Fold01: preprocessor 1/1, model 1/1 (predictions)
i Fold02: preprocessor 1/1
i Fold02: preprocessor 1/1 (prediction data)
i Fold02: preprocessor 1/1, model 1/1
i Fold02: preprocessor 1/1, model 1/1 (predictions)
i Fold03: preprocessor 1/1
i Fold03: preprocessor 1/1 (prediction data)
i Fold03: preprocessor 1/1, model 1/1
i Fold03: preprocessor 1/1, model 1/1 (predictions)
i Fold04: preprocessor 1/1
i Fold04: preprocessor 1/1 (prediction data)
i Fold04: preprocessor 1/1, model 1/1
i Fold04: preprocessor 1/1, model 1/1 (predictions)
i Fold05: preprocessor 1/1
i Fold05: preprocessor 1/1 (prediction data)
i Fold05: preprocessor 1/1, model 1/1
i Fold05: preprocessor 1/1, model 1/1 (predictions)
i Fold06: preprocessor 1/1
i Fold06: preprocessor 1/1 (prediction data)
i Fold06: preprocessor 1/1, model 1/1
i Fold06: preprocessor 1/1, model 1/1 (predictions)
i Fold07: preprocessor 1/1
i Fold07: preprocessor 1/1 (prediction data)
i Fold07: preprocessor 1/1, model 1/1
i Fold07: preprocessor 1/1, model 1/1 (predictions)
i Fold08: preprocessor 1/1
i Fold08: preprocessor 1/1 (prediction data)
i Fold08: preprocessor 1/1, model 1/1
i Fold08: preprocessor 1/1, model 1/1 (predictions)
i Fold09: preprocessor 1/1
i Fold09: preprocessor 1/1 (prediction data)
i Fold09: preprocessor 1/1, model 1/1
i Fold09: preprocessor 1/1, model 1/1 (predictions)
i Fold10: preprocessor 1/1
i Fold10: preprocessor 1/1 (prediction data)
i Fold10: preprocessor 1/1, model 1/1
i Fold10: preprocessor 1/1, model 1/1 (predictions)
v Estimating performance
Output
# Tuning results
# 10-fold cross-validation
# A tibble: 30 x 5
splits id .metrics .notes .iter
<list> <chr> <list> <list> <int>
1 <split [28/4]> Fold01 <tibble [4 x 5]> <tibble [0 x 4]> 0
2 <split [28/4]> Fold02 <tibble [4 x 5]> <tibble [0 x 4]> 0
3 <split [29/3]> Fold03 <tibble [4 x 5]> <tibble [0 x 4]> 0
4 <split [29/3]> Fold04 <tibble [4 x 5]> <tibble [0 x 4]> 0
5 <split [29/3]> Fold05 <tibble [4 x 5]> <tibble [0 x 4]> 0
6 <split [29/3]> Fold06 <tibble [4 x 5]> <tibble [0 x 4]> 0
7 <split [29/3]> Fold07 <tibble [4 x 5]> <tibble [0 x 4]> 0
8 <split [29/3]> Fold08 <tibble [4 x 5]> <tibble [0 x 4]> 0
9 <split [29/3]> Fold09 <tibble [4 x 5]> <tibble [0 x 4]> 0
10 <split [29/3]> Fold10 <tibble [4 x 5]> <tibble [0 x 4]> 0
# i 20 more rows
Code
tune_bayes(wflow, resamples = folds, param_info = pset, initial = iter1, iter = iter2,
control = control_bayes(verbose_iter = TRUE))
Message
Optimizing rmse using the expected improvement
-- Iteration 1 -----------------------------------------------------------------
i Current best: rmse=2.505 (@iter 0)
(x) GP has a LOO R² of 0% and is unreliable.
x Gaussian process model failed
i Generating a candidate as far away from existing points as possible.
i Uncertainty sample
i num_comp=11
i Estimating performance
v Estimating performance
(x) Newest results: rmse=3.589 (+/-0.499)
-- Iteration 2 -----------------------------------------------------------------
i Current best: rmse=2.505 (@iter 0)
v Gaussian process model (LOO R²: 30.5%)
i Generating 15 candidates.
i num_comp=4
i Estimating performance
v Estimating performance
<3 Newest results: rmse=2.461 (+/-0.37)
Output
# Tuning results
# 10-fold cross-validation
# A tibble: 30 x 5
splits id .metrics .notes .iter
<list> <chr> <list> <list> <int>
1 <split [28/4]> Fold01 <tibble [4 x 5]> <tibble [0 x 4]> 0
2 <split [28/4]> Fold02 <tibble [4 x 5]> <tibble [0 x 4]> 0
3 <split [29/3]> Fold03 <tibble [4 x 5]> <tibble [0 x 4]> 0
4 <split [29/3]> Fold04 <tibble [4 x 5]> <tibble [0 x 4]> 0
5 <split [29/3]> Fold05 <tibble [4 x 5]> <tibble [0 x 4]> 0
6 <split [29/3]> Fold06 <tibble [4 x 5]> <tibble [0 x 4]> 0
7 <split [29/3]> Fold07 <tibble [4 x 5]> <tibble [0 x 4]> 0
8 <split [29/3]> Fold08 <tibble [4 x 5]> <tibble [0 x 4]> 0
9 <split [29/3]> Fold09 <tibble [4 x 5]> <tibble [0 x 4]> 0
10 <split [29/3]> Fold10 <tibble [4 x 5]> <tibble [0 x 4]> 0
# i 20 more rows
Code
tune_bayes(wflow, resamples = folds, param_info = pset, initial = iter1, iter = iter2,
control = control_bayes(verbose_iter = TRUE, verbose = TRUE))
Message
> Generating a set of 2 initial parameter results
v Initialization complete
Optimizing rmse using the expected improvement
-- Iteration 1 -----------------------------------------------------------------
i Current best: rmse=2.505 (@iter 0)
(x) GP has a LOO R² of 0% and is unreliable.
x Gaussian process model failed
i Generating a candidate as far away from existing points as possible.
i Uncertainty sample
i num_comp=11
i Estimating performance
i Fold01: preprocessor 1/1
i Fold01: preprocessor 1/1 (prediction data)
i Fold01: preprocessor 1/1, model 1/1
i Fold01: preprocessor 1/1, model 1/1 (predictions)
i Fold02: preprocessor 1/1
i Fold02: preprocessor 1/1 (prediction data)
i Fold02: preprocessor 1/1, model 1/1
i Fold02: preprocessor 1/1, model 1/1 (predictions)
i Fold03: preprocessor 1/1
i Fold03: preprocessor 1/1 (prediction data)
i Fold03: preprocessor 1/1, model 1/1
i Fold03: preprocessor 1/1, model 1/1 (predictions)
i Fold04: preprocessor 1/1
i Fold04: preprocessor 1/1 (prediction data)
i Fold04: preprocessor 1/1, model 1/1
i Fold04: preprocessor 1/1, model 1/1 (predictions)
i Fold05: preprocessor 1/1
i Fold05: preprocessor 1/1 (prediction data)
i Fold05: preprocessor 1/1, model 1/1
i Fold05: preprocessor 1/1, model 1/1 (predictions)
i Fold06: preprocessor 1/1
i Fold06: preprocessor 1/1 (prediction data)
i Fold06: preprocessor 1/1, model 1/1
i Fold06: preprocessor 1/1, model 1/1 (predictions)
i Fold07: preprocessor 1/1
i Fold07: preprocessor 1/1 (prediction data)
i Fold07: preprocessor 1/1, model 1/1
i Fold07: preprocessor 1/1, model 1/1 (predictions)
i Fold08: preprocessor 1/1
i Fold08: preprocessor 1/1 (prediction data)
i Fold08: preprocessor 1/1, model 1/1
i Fold08: preprocessor 1/1, model 1/1 (predictions)
i Fold09: preprocessor 1/1
i Fold09: preprocessor 1/1 (prediction data)
i Fold09: preprocessor 1/1, model 1/1
i Fold09: preprocessor 1/1, model 1/1 (predictions)
i Fold10: preprocessor 1/1
i Fold10: preprocessor 1/1 (prediction data)
i Fold10: preprocessor 1/1, model 1/1
i Fold10: preprocessor 1/1, model 1/1 (predictions)
v Estimating performance
(x) Newest results: rmse=3.589 (+/-0.499)
-- Iteration 2 -----------------------------------------------------------------
i Current best: rmse=2.505 (@iter 0)
v Gaussian process model (LOO R²: 30.5%)
i Generating 15 candidates.
i num_comp=4
i Estimating performance
i Fold01: preprocessor 1/1
i Fold01: preprocessor 1/1 (prediction data)
i Fold01: preprocessor 1/1, model 1/1
i Fold01: preprocessor 1/1, model 1/1 (predictions)
i Fold02: preprocessor 1/1
i Fold02: preprocessor 1/1 (prediction data)
i Fold02: preprocessor 1/1, model 1/1
i Fold02: preprocessor 1/1, model 1/1 (predictions)
i Fold03: preprocessor 1/1
i Fold03: preprocessor 1/1 (prediction data)
i Fold03: preprocessor 1/1, model 1/1
i Fold03: preprocessor 1/1, model 1/1 (predictions)
i Fold04: preprocessor 1/1
i Fold04: preprocessor 1/1 (prediction data)
i Fold04: preprocessor 1/1, model 1/1
i Fold04: preprocessor 1/1, model 1/1 (predictions)
i Fold05: preprocessor 1/1
i Fold05: preprocessor 1/1 (prediction data)
i Fold05: preprocessor 1/1, model 1/1
i Fold05: preprocessor 1/1, model 1/1 (predictions)
i Fold06: preprocessor 1/1
i Fold06: preprocessor 1/1 (prediction data)
i Fold06: preprocessor 1/1, model 1/1
i Fold06: preprocessor 1/1, model 1/1 (predictions)
i Fold07: preprocessor 1/1
i Fold07: preprocessor 1/1 (prediction data)
i Fold07: preprocessor 1/1, model 1/1
i Fold07: preprocessor 1/1, model 1/1 (predictions)
i Fold08: preprocessor 1/1
i Fold08: preprocessor 1/1 (prediction data)
i Fold08: preprocessor 1/1, model 1/1
i Fold08: preprocessor 1/1, model 1/1 (predictions)
i Fold09: preprocessor 1/1
i Fold09: preprocessor 1/1 (prediction data)
i Fold09: preprocessor 1/1, model 1/1
i Fold09: preprocessor 1/1, model 1/1 (predictions)
i Fold10: preprocessor 1/1
i Fold10: preprocessor 1/1 (prediction data)
i Fold10: preprocessor 1/1, model 1/1
i Fold10: preprocessor 1/1, model 1/1 (predictions)
v Estimating performance
<3 Newest results: rmse=2.461 (+/-0.37)
Output
# Tuning results
# 10-fold cross-validation
# A tibble: 30 x 5
splits id .metrics .notes .iter
<list> <chr> <list> <list> <int>
1 <split [28/4]> Fold01 <tibble [4 x 5]> <tibble [0 x 4]> 0
2 <split [28/4]> Fold02 <tibble [4 x 5]> <tibble [0 x 4]> 0
3 <split [29/3]> Fold03 <tibble [4 x 5]> <tibble [0 x 4]> 0
4 <split [29/3]> Fold04 <tibble [4 x 5]> <tibble [0 x 4]> 0
5 <split [29/3]> Fold05 <tibble [4 x 5]> <tibble [0 x 4]> 0
6 <split [29/3]> Fold06 <tibble [4 x 5]> <tibble [0 x 4]> 0
7 <split [29/3]> Fold07 <tibble [4 x 5]> <tibble [0 x 4]> 0
8 <split [29/3]> Fold08 <tibble [4 x 5]> <tibble [0 x 4]> 0
9 <split [29/3]> Fold09 <tibble [4 x 5]> <tibble [0 x 4]> 0
10 <split [29/3]> Fold10 <tibble [4 x 5]> <tibble [0 x 4]> 0
# i 20 more rows
Code
cars_init_res <- tune_grid(model, preprocessor = rec, resamples = data_folds,
grid = cars_grid)
Message
> A | error: Error in `step_spline_b()`:
Caused by error in `prep()`:
! `deg_free` must be a whole number, not a numeric `NA`.
> B | warning: Some 'x' values beyond boundary knots may cause ill-conditioned basis
functions.
Code
set.seed(283)
cars_bayes_res <- tune_bayes(model, preprocessor = rec, resamples = data_folds,
initial = cars_init_res, iter = 2)
Message
> A | error: Error in `step_spline_b()`:
Caused by error in `prep()`:
! `degree` (3) must be less than or equal to `deg_free` (2) when `complete_set = TRUE`.
Condition
Warning:
All models failed. Run `show_notes(.Last.tune.result)` for more information.
Message
> A | warning: Some 'x' values beyond boundary knots may cause ill-conditioned basis
functions.
Code
cars_res <- tune_bayes(svm_mod, preprocessor = rec, resamples = data_folds)
Message
> A | error: Error in `step_spline_b()`:
Caused by error in `prep()`:
! `deg_free` must be a whole number, not a numeric `NA`.
Condition
Warning:
All models failed. Run `show_notes(.Last.tune.result)` for more information.
Message
x Optimization stopped prematurely; returning current results.
Code
cars_res <- tune_bayes(wflow, resamples = data_folds, control = control_bayes(
extract = function(x) {
1
}, save_pred = TRUE))
Message
> A | error: The following predictor was not found in `data`: "z".
Condition
Warning:
All models failed. Run `show_notes(.Last.tune.result)` for more information.
Message
x Optimization stopped prematurely; returning current results.
Code
tune_bayes(rec_tune_1, model = lm_mod, resamples = rsample::vfold_cv(mtcars, v = 2),
param_info = extract_parameter_set_dials(rec_tune_1), iter = iter1, initial = iter2)
Condition
Error in `tune_bayes()`:
! The first argument to `tune_bayes()` should be either a model or workflow, not a <recipe> object.
Code
tune_bayes(mpg ~ ., svm_mod, resamples = rsample::vfold_cv(mtcars, v = 2),
param_info = extract_parameter_set_dials(svm_mod), initial = iter1, iter = iter2)
Condition
Error in `tune_bayes()`:
! The first argument to `tune_bayes()` should be either a model or workflow, not a <formula> object.
Code
setdiff(new_obj, current_objs)
Output
[1] "candidates" "gp_fit" "i" "score_card"
Code
res2 <- tune_bayes(wflow, resamples = folds, param_info = pset, initial = iter1,
iter = iter2, control = control_bayes(save_workflow = TRUE))
Code
tune:::check_bayes_initial_size(5, 3, FALSE)
Message
! There are 5 tuning parameters and 3 grid points were requested.
* This is likely to cause numerical issues in the first few search iterations.
Code
tune:::check_bayes_initial_size(5, 3, TRUE)
Message
! There are 5 tuning parameters and 3 grid points were requested.
* This is likely to cause numerical issues in the first few search iterations.
* With racing, only completely resampled parameters are used.
Code
tune:::check_bayes_initial_size(2, 2, FALSE)
Message
! There are 2 tuning parameters and 2 grid points were requested.
* This is likely to cause numerical issues in the first few search iterations.
Code
tune:::check_bayes_initial_size(5, 1, FALSE)
Condition
Error in `tune:::check_bayes_initial_size()`:
! There are 5 tuning parameters and 1 grid point was requested.
i The GP model requires 2+ initial points. For best performance, supply more initial points than there are tuning parameters.
Code
tune:::check_bayes_initial_size(5, 1, TRUE)
Condition
Error in `tune:::check_bayes_initial_size()`:
! There are 5 tuning parameters and 1 grid point was requested.
i The GP model requires 2+ initial points. For best performance, supply more initial points than there are tuning parameters.
i With racing, only completely resampled parameters are used.
Code
tune:::check_bayes_initial_size(1, 1, FALSE)
Condition
Error in `tune:::check_bayes_initial_size()`:
! There is 1 tuning parameter and 1 grid point was requested.
i The GP model requires 2+ initial points. For best performance, supply more initial points than there are tuning parameters.
Code
set.seed(1)
res <- tune_bayes(mod, Sale_Price ~ Neighborhood + Gr_Liv_Area + Year_Built +
Bldg_Type + Latitude + Longitude, resamples = folds, initial = 3, metrics = yardstick::metric_set(
rsq), param_info = parameters(dials::cost_complexity(c(-2, 0))))
Message
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 1 missing value was found and removed before fitting
the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 2 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 3 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 4 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 5 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 6 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 7 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 8 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 9 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! For the rsq estimates, 10 missing values were found and removed before
fitting the Gaussian process model.
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! No improvement for 10 iterations; returning current results.
Code
set.seed(2)
res_fail <- tune_bayes(mod, Sale_Price ~ Neighborhood + Gr_Liv_Area +
Year_Built + Bldg_Type + Latitude + Longitude, resamples = folds, initial = 5,
metrics = yardstick::metric_set(rsq), param_info = parameters(dials::cost_complexity(
c(0, 0.5))))
Message
> A | warning: A correlation computation is required, but `estimate` is constant and has 0 standard deviation, resulting in a divide by 0 error. `NA` will be returned.
! All of the rsq estimates were missing. The Gaussian process model cannot be
fit to the data.
(x) GP failed: Error in initialize(...) : is.numeric(X) is not TRUE
Condition
Error in `apply()`:
! dim(X) must have a positive length
Message
x Optimization stopped prematurely; returning current results.
iter edge cases (#721)
Code
tune_bayes(wf, boots, iter = -1)
Condition
Error in `tune_bayes()`:
! The `iter` argument must be a non-negative integer.
Code
tune_bayes(wf, boots, iter = c(-1, 0, 1))
Condition
Error in `tune_bayes()`:
! The `iter` argument must be a non-negative integer.
Code
tune_bayes(wf, boots, iter = c(0, 1, 2))
Condition
Error in `tune_bayes()`:
! The `iter` argument must be a non-negative integer.
Code
tune_bayes(wf, boots, iter = NA)
Condition
Error in `tune_bayes()`:
! The `iter` argument must be a non-negative integer.
Code
tune_bayes(wf, boots, iter = NULL)
Condition
Error in `tune_bayes()`:
! The `iter` argument must be a non-negative integer.