compare_tuned_classifiers: Comparing Tuned Classifiers' Performance

Description Usage Arguments Details Value Examples

View source: R/model.R

Description

Fits and tunes the hyper-parameters of five commonly used classifiers on the data and compares their performance based on AUROC (area under the ROC curve).

Usage

compare_tuned_classifiers(
  recipe,
  train_df = NULL,
  test_df,
  tune_metric = "f_meas",
  target_lab = 1,
  cv_fold_n = 5,
  tune_n = 10,
  parallel = FALSE
)

Arguments

recipe

A recipe object created with the recipes package.

train_df

Training data frame used to fit the models. If no train_df is provided, the function attempts to extract the training data from recipe, as sketched below.
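
For instance, because a recipe created with recipes::recipe(..., data = train) carries the data it was defined on, a minimal call can omit train_df entirely. A sketch, reusing the recipe and test objects from the Examples section below:

  # train_df omitted; training data is extracted from the recipe
  compare_tuned_classifiers(recipe = recipe, test_df = test)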

test_df

Test data frame to test model performances.

tune_metric

Name of the metric used to tune the models. Available metric names: "roc_auc", "f_meas", "bal_accuracy", "pr_auc". Default value is "f_meas".
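
To tune on ROC AUC instead of the default F-measure, pass the corresponding metric name. A sketch, reusing the recipe and test objects from the Examples section below:

  # Tune each model on ROC AUC rather than "f_meas"
  compare_tuned_classifiers(recipe = recipe, test_df = test, tune_metric = "roc_auc")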

target_lab

Label used in the target feature to indicate the positive outcome. Default value is 1.

cv_fold_n

Number of folds to use for cross-validation. Default value is 5.

tune_n

Total number of hyper-parameter combinations to try per model. Default value is 10. A wider search is sketched below.
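
A more thorough (and slower) search widens both settings, for example 10-fold cross-validation over 25 candidate combinations. A sketch, reusing the recipe and test objects from the Examples section below:

  # 10 CV folds, 25 hyper-parameter combinations per model
  compare_tuned_classifiers(recipe = recipe, test_df = test, cv_fold_n = 10, tune_n = 25)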

parallel

Whether to run tuning in parallel. Default value is FALSE.
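
If parallel = TRUE follows the usual tidymodels convention of picking up a registered foreach backend (an assumption; the function's internals are not shown here), a backend would be registered before the call:

  # Assumption: parallel = TRUE uses a registered foreach/doParallel backend,
  # as is typical for tune-based workflows.
  library(doParallel)
  cl <- parallel::makePSOCKcluster(2)
  doParallel::registerDoParallel(cl)
  compare_tuned_classifiers(recipe = recipe, test_df = test, parallel = TRUE)
  parallel::stopCluster(cl)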

Details

Value

Returns the following outputs:

Examples

  library(tidymodels)
  split <- rsample::initial_split(wine, strata = quality_bin)
  train <- rsample::training(split)
  test <- rsample::testing(split)
  recipe <- recipes::recipe(quality_bin ~ ., data = train) %>%
    update_role(ID, new_role = 'identification') %>%
    step_string2factor(all_nominal()) %>%
    step_knnimpute(all_predictors()) %>%
    step_normalize(all_numeric())

  compare_tuned_classifiers(
    recipe = recipe,
    test_df = test,
    tune_metric = "f_meas",
    target_lab = 1,
    parallel = FALSE
  )
