`r descr_models("rule_fit", "xrf")`
```{r xrf-param-info, echo = FALSE}
defaults <- 
  tibble::tibble(
    parsnip = c("tree_depth", "trees", "learn_rate", "mtry", "min_n",
                "loss_reduction", "sample_size", "stop_iter", "penalty"),
    default = c("6L", "15L", "0.3", "see below", "1L", "0.0", "1.0", "Inf", "0.1")
  )

param <- 
  rule_fit() %>% 
  set_engine("xrf") %>% 
  make_parameter_list(defaults)
```
This model has `r nrow(param)` tuning parameters:

```{r xrf-param-list, echo = FALSE, results = "asis"}
param$item
```
`r uses_extension("rule_fit", "xrf", "regression")`
```{r xrf-reg}
library(rules)

rule_fit(
  mtry = numeric(1),
  trees = integer(1),
  min_n = integer(1),
  tree_depth = integer(1),
  learn_rate = numeric(1),
  loss_reduction = numeric(1),
  sample_size = numeric(1),
  penalty = numeric(1)
) %>%
  set_engine("xrf") %>%
  set_mode("regression") %>%
  translate()
```
`r uses_extension("rule_fit", "xrf", "classification")`
```{r xrf-cls}
library(rules)

rule_fit(
  mtry = numeric(1),
  trees = integer(1),
  min_n = integer(1),
  tree_depth = integer(1),
  learn_rate = numeric(1),
  loss_reduction = numeric(1),
  sample_size = numeric(1),
  penalty = numeric(1)
) %>%
  set_engine("xrf") %>%
  set_mode("classification") %>%
  translate()
```
Note that, per the documentation in `?xrf`, transformations of the response variable are not supported. To use these with `rule_fit()`, we recommend using a recipe instead of the formula method.
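As an illustrative sketch (the toy data frame `train_data` and the log transformation are assumptions for the example, not part of the package documentation), the outcome transformation can be moved out of the formula and into a recipe inside a workflow:

```r
library(rules)      # rule_fit() and the xrf engine
library(recipes)
library(workflows)

# Hypothetical training data; suppose we want to model log(y)
train_data <- data.frame(y = rexp(50) + 1, x1 = rnorm(50), x2 = rnorm(50))

# Transform the outcome in a recipe, not in the model formula.
# skip = TRUE so the step is not applied to new data at predict time,
# where the outcome column may be absent.
rec <- recipe(y ~ ., data = train_data) %>%
  step_log(y, skip = TRUE)

xrf_wflow <- workflow() %>%
  add_recipe(rec) %>%
  add_model(
    rule_fit(trees = 15) %>%
      set_engine("xrf") %>%
      set_mode("regression")
  )

# fit(xrf_wflow, data = train_data)
```

This keeps the formula passed to `xrf()` free of outcome transformations while still fitting the model on the transformed scale.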
Also, there are several configuration differences in how `xrf()` is fit between that package and the wrapper used in rules. Some differences in default values are:
| parameter   | **xrf** | **rules** |
|-------------|---------|-----------|
| `trees`     | 100     | 15        |
| `max_depth` | 3       | 6         |
These differences will create a disparity in the values of the `penalty` argument that glmnet uses. Also, rules can set `penalty` directly, whereas xrf uses an internal 5-fold cross-validation to determine it (by default).
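For instance, a minimal sketch of fixing the penalty in the model specification (the `0.01` value is an arbitrary example), so that the glmnet regularization amount is set explicitly rather than chosen by xrf's internal cross-validation:

```r
library(rules)

# Set the glmnet penalty directly instead of relying on xrf's CV default
rule_fit_spec <-
  rule_fit(trees = 15, penalty = 0.01) %>%
  set_engine("xrf") %>%
  set_mode("regression")

rule_fit_spec %>% translate()
```

In practice, `penalty` is often tuned (e.g. with the tune package) rather than fixed at a single value.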
`mtry`