champion_challenger: Compare machine learning models

Description Usage Arguments Value Examples

View source: R/champion_challenger.R

Description

Determining whether one model is better than another is a difficult task, mostly because many aspects have to be covered to make such a judgement. Overall performance, performance on a crucial subset, and the distribution of residuals are only a few among many criteria related to that issue. This function allows the user to create a report based on various sections, each of which says something different about the relation between the champion and the challengers. The DALEXtra package provides 3 base sections, which are funnel_measure, overall_comparison and training_test_comparison, but any object that has a generic plot function can be included in the report.

Usage

champion_challenger(
  sections,
  dot_size = 4,
  output_dir_path = getwd(),
  output_name = "Report",
  model_performance_table = FALSE,
  title = "ChampionChallenger",
  author = Sys.info()[["user"]],
  ...
)

Arguments

sections

- list of sections to be attached to the report. These can be the sections available with DALEXtra, which are funnel_measure, training_test_comparison and overall_comparison, or any other explanation object that works with the plot function. Please provide a name for each non-standard section; it will be used as the section title. Otherwise, the class of the object will be used.
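A hedged sketch (not run) of attaching a non-standard section under a custom name. It assumes explainers built as in the Examples section and an object returned by DALEX::model_performance(), which has a generic plot() method; the list name "Performance" is an illustrative choice and becomes the section title in the report.

```r
# Any object with a plot() method can serve as a section; naming the list
# element sets the section title (otherwise the object's class is used).
perf <- DALEX::model_performance(explainer_lm)  # assumes explainer_lm exists
champion_challenger(
  sections = list("Performance" = perf)
)
```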

dot_size

- dot_size argument passed to plot.funnel_measure if a funnel_measure section is present.

output_dir_path

- path to the directory where the report should be created. By default it is the current working directory.

output_name

- name of the report. By default it is "Report".

model_performance_table

- if TRUE and an overall_comparison section is present, a table of scores will be displayed.

title

- title of the report. By default it is "ChampionChallenger".

author

- author of the report. By default it is the current user name.

...

- other parameters passed to rmarkdown::render.
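A small sketch (not run) of forwarding an argument through ... to rmarkdown::render; quiet = TRUE is a real render option that suppresses knitting output, and plot_data is assumed to be a funnel_measure object as in the Examples section.

```r
# Arguments not matched by champion_challenger() are passed on to
# rmarkdown::render(), e.g. to silence progress messages while knitting.
champion_challenger(list(plot_data), quiet = TRUE)
```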

Value

An rmarkdown report.

Examples

library("mlr")
library("DALEXtra")
task <- mlr::makeRegrTask(
  id = "R",
  data = apartments,
  target = "m2.price"
)
learner_lm <- mlr::makeLearner(
  "regr.lm"
)
model_lm <- mlr::train(learner_lm, task)
explainer_lm <- explain_mlr(model_lm, apartmentsTest, apartmentsTest$m2.price, label = "LM")

learner_rf <- mlr::makeLearner(
  "regr.randomForest"
)
model_rf <- mlr::train(learner_rf, task)
explainer_rf <- explain_mlr(model_rf, apartmentsTest, apartmentsTest$m2.price, label = "RF")

learner_gbm <- mlr::makeLearner(
  "regr.gbm"
)
model_gbm <- mlr::train(learner_gbm, task)
explainer_gbm <- explain_mlr(model_gbm, apartmentsTest, apartmentsTest$m2.price, label = "GBM")

plot_data <- funnel_measure(explainer_lm, list(explainer_rf, explainer_gbm),
                            nbins = 5, measure_function = DALEX::loss_root_mean_square)

champion_challenger(list(plot_data), dot_size = 3)

DALEXtra documentation built on May 9, 2021, 9:07 a.m.