Description
Fits and tunes the hyper-parameters of five commonly used classifiers on the data and compares their performance based on the area under the ROC curve (AUROC).
Usage

compare_tuned_classifiers(
  recipe,
  train_df = NULL,
  test_df,
  tune_metric = "f_meas",
  target_lab = 1,
  cv_fold_n = 5,
  tune_n = 10,
  parallel = FALSE
)
Arguments

recipe
  An un-prepped recipe object created with the recipes package.

train_df
  Training data frame to fit the models on. If train_df is not provided, the function will try to extract the training data from the recipe.

test_df
  Test data frame used to evaluate model performance.

tune_metric
  Name of the metric used to tune the models. Available metric names: "roc_auc", "f_meas", "bal_accuracy", "pr_auc". Default is "f_meas". A sketch of the matching yardstick metrics follows this argument list.

target_lab
  Label used in the target feature to indicate a positive outcome. Default is 1.

cv_fold_n
  Number of folds to use for cross-validation. Default is 5.

tune_n
  Total number of hyper-parameter combinations to try. Default is 10.

parallel
  Whether to run the tuning in parallel. Default is FALSE. A parallel set-up sketch follows the Examples section.
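The four accepted tune_metric names correspond to yardstick metric functions of the same names. The sketch below only shows how such a metric set can be built with yardstick; that the package selects the metric this way internally is an assumption.

library(yardstick)

# The four documented tune_metric options collected into a yardstick metric set
metric_set(roc_auc, f_meas, bal_accuracy, pr_auc)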
Details

Make sure the recipe object supplied is un-prepped, i.e. prep() has not yet been called on it.

The five models compared are: logistic regression, elastic net, random forest, support vector machine, and extreme gradient boosting (XGBoost).

Hyper-parameter candidates are generated by the tune package's default random grid, i.e. the grid argument inside tune::tune_grid(); see the sketch below.

Setting tune_n to a very large number will slow the process down and can potentially break it. Use a manageable number, e.g. 10 or 20.
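Below is a minimal sketch of how cv_fold_n and tune_n are assumed to map onto the resamples and grid arguments of tune::tune_grid() for one of the five models; the random forest specification is shown purely as an illustration, and train, recipe and quality_bin refer to the objects built in the Examples section further down.

library(tidymodels)

# Illustrative spec with two tunable hyper-parameters
rf_spec <- rand_forest(mtry = tune(), min_n = tune(), trees = 500) %>%
  set_engine("ranger") %>%
  set_mode("classification")

folds <- vfold_cv(train, v = 5, strata = quality_bin)  # cv_fold_n = 5

rf_res <- tune_grid(
  workflow() %>% add_recipe(recipe) %>% add_model(rf_spec),
  resamples = folds,
  grid = 10,                       # tune_n = 10 candidate combinations
  metrics = metric_set(f_meas)     # tune_metric = "f_meas"
)

show_best(rf_res, metric = "f_meas")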
Value

Returns the following outputs:

A line plot comparing the area under the ROC curve scores of the five classifiers.

A tibble containing the test data along with the predicted probabilities calculated by the five classifiers.

Four tibbles containing the hyper-parameter tuning results, as produced by tune::tune_grid().

The final fitted models with the best tuned parameters.
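The component names of the returned object are not listed in this documentation, so the sketch below only performs generic inspection; recipe and test refer to the objects built in the Examples section below.

res <- compare_tuned_classifiers(recipe = recipe, test_df = test)

# Generic inspection of the returned components;
# the actual element names are package-specific.
names(res)
str(res, max.level = 1)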
Examples

library(tidymodels)

split <- rsample::initial_split(wine, strata = quality_bin)
train <- rsample::training(split)
test <- rsample::testing(split)

recipe <- recipes::recipe(quality_bin ~ ., data = train) %>%
  update_role(ID, new_role = 'identification') %>%
  step_string2factor(all_nominal()) %>%
  step_knnimpute(all_predictors()) %>%
  step_normalize(all_numeric())

compare_tuned_classifiers(recipe = recipe, test_df = test, tune_metric = "f_meas",
                          target_lab = 1, parallel = FALSE)
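When parallel = TRUE is used, a parallel backend typically has to be registered before the call. The sketch below assumes a doParallel backend is what the underlying tuning machinery picks up, which is not confirmed by this documentation.

library(doParallel)

cl <- makePSOCKcluster(2)   # or parallel::detectCores() - 1
registerDoParallel(cl)

compare_tuned_classifiers(recipe = recipe, test_df = test, tune_metric = "f_meas",
                          target_lab = 1, parallel = TRUE)

stopCluster(cl)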