View source: R/discretize_xgb.R
step_discretize_xgb  R Documentation 
step_discretize_xgb()
creates a specification of a recipe step that will
discretize numeric data (e.g. integers or doubles) into bins in a supervised
way using an XgBoost model.
step_discretize_xgb(
recipe,
...,
role = NA,
trained = FALSE,
outcome = NULL,
sample_val = 0.2,
learn_rate = 0.3,
num_breaks = 10,
tree_depth = 1,
min_n = 5,
rules = NULL,
skip = FALSE,
id = rand_id("discretize_xgb")
)
recipe 
A recipe object. The step will be added to the sequence of operations for this recipe. 
... 
One or more selector functions to choose which variables are
affected by the step. See selections() for more details.
role 
Defaults to NA.
trained 
A logical to indicate if the quantities for preprocessing have been estimated. 
outcome 
A call to vars() to specify which variable is used as the outcome to train the xgboost model.
sample_val 
Share of data used for validation (with early stopping) of the learned splits (the rest is used for training). Defaults to 0.20. 
learn_rate 
The rate at which the boosting algorithm adapts from
iteration to iteration. Corresponds to eta in the xgboost package.
num_breaks 
The maximum number of discrete bins to bucket continuous
features. Corresponds to max_bin in the xgboost package.
tree_depth 
The maximum depth of the tree (i.e. number of splits).
Corresponds to max_depth in the xgboost package.
min_n 
The minimum number of instances needed to be in each node.
Corresponds to min_child_weight in the xgboost package.
rules 
The splitting rules of the best XgBoost tree to retain for each variable. 
skip 
A logical. Should the step be skipped when the recipe is baked by
bake()? While all operations are baked when prep() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.
id 
A character string that is unique to this step to identify it. 
step_discretize_xgb()
creates non-uniform bins from numeric variables by
utilizing information about the outcome variable and applying an xgboost
model. It is advised to impute missing values before this step. This step is
particularly useful with linear models because the non-uniform bins make it
easier to learn non-linear patterns from the data.
The best set of buckets for each variable is selected using an internal early-stopping scheme implemented in the xgboost package, which makes this discretization method resistant to overfitting.
The predefined values of the underlying xgboost parameters yield good and
reasonably complex results. However, if one wishes to tune them, the
recommended path is to first change the value of num_breaks
to e.g. 20
or 30. If that doesn't give satisfactory results, one could experiment with
modifying the tree_depth
or min_n
parameters. Note that it is not
recommended to tune learn_rate
simultaneously with the other parameters.
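As a sketch of that advice (the step and argument names are from this page; the use of credit_data is borrowed from the example further down), a first tuning pass that raises num_breaks might look like:

```r
library(recipes)
library(embed)

data(credit_data, package = "modeldata")

# First tuning attempt per the advice above: raise num_breaks from its
# default of 10 to 20 before touching tree_depth or min_n.
rec <- recipe(Status ~ Income + Assets, data = credit_data) %>%
  step_impute_median(Income, Assets) %>%
  step_discretize_xgb(Income, Assets, outcome = "Status", num_breaks = 20)
```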
This step requires the xgboost package. If not installed, the step will stop with a note about installing the package.
Note that the original data will be replaced with the new bins.
An updated version of recipe
with the new step added to the
sequence of any existing operations.
When you tidy()
this step, a tibble with columns terms
(the columns selected) and values
(the learned split values) is returned.
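For instance, a minimal sketch (assuming a prepped recipe built as in the example below, where the discretization step is the second step):

```r
library(recipes)
library(embed)

data(credit_data, package = "modeldata")

rec <- recipe(Status ~ Income + Assets, data = credit_data) %>%
  step_impute_median(Income, Assets) %>%
  step_discretize_xgb(Income, Assets, outcome = "Status") %>%
  prep()

# One row per learned split value; `terms` names the originating column
tidy(rec, number = 2)
```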
This step has 5 tuning parameters:
sample_val: Proportion of data for validation (type: double, default: 0.2)
learn_rate: Learning Rate (type: double, default: 0.3)
num_breaks: Number of Cut Points (type: integer, default: 10)
tree_depth: Tree Depth (type: integer, default: 1)
min_n: Minimal Node Size (type: integer, default: 5)
This step performs a supervised operation that can utilize case weights.
To use them, see the documentation in recipes::case_weights and the examples on
tidymodels.org.
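A minimal sketch of using case weights with this step, assuming frequency weights created with hardhat::frequency_weights() (the column name wt and the all-ones weights are hypothetical; recipes detects case-weight columns by their class):

```r
library(recipes)
library(embed)
library(hardhat)

data(credit_data, package = "modeldata")

# Hypothetical frequency-weight column; the case-weights class is what
# tells recipes to treat it as case weights rather than a predictor.
credit_wts <- credit_data
credit_wts$wt <- frequency_weights(rep(1L, nrow(credit_wts)))

rec <- recipe(Status ~ Income + Assets + wt, data = credit_wts) %>%
  step_impute_median(Income, Assets) %>%
  step_discretize_xgb(Income, Assets, outcome = "Status")
```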
step_discretize_cart(), recipes::recipe(), recipes::prep(), recipes::bake()
library(rsample)
library(recipes)
data(credit_data, package = "modeldata")
set.seed(1234)
split <- initial_split(credit_data[1:1000, ], strata = "Status")
credit_data_tr <- training(split)
credit_data_te <- testing(split)
xgb_rec <-
  recipe(Status ~ Income + Assets, data = credit_data_tr) %>%
  step_impute_median(Income, Assets) %>%
  step_discretize_xgb(Income, Assets, outcome = "Status")
xgb_rec <- prep(xgb_rec, training = credit_data_tr)
bake(xgb_rec, credit_data_te, Assets)