```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.width = 6,
  fig.height = 3,
  fig.align = "center",
  out.width = "50%"
  # fig.path = "Readme_files/"
)
library(compboost)
```
`compboost` comes with a variety of functions to gain deeper insights into a fitted model. Using these functions provides different views on the model.

The data set we use is `mtcars`:
```r
knitr::kable(head(mtcars))
```
We want to model the miles per gallon (`mpg`). As features, we include the linear and centered spline of `hp`, `wt`, and `qsec`. Additionally, we add a categorical base learner for the number of cylinders `cyl`:
```r
mtcars$cyl = as.factor(mtcars$cyl)

set.seed(31415)
cboost = Compboost$new(data = mtcars, target = "mpg",
  learning_rate = 0.02, oob_fraction = 0.2)
cboost$addComponents("hp", df = 3)
cboost$addComponents("wt", df = 3)
cboost$addComponents("qsec", df = 3)
cboost$addBaselearner("cyl", "ridge", BaselearnerCategoricalRidge, df = 3)
cboost$train(500L, trace = 100L)
```
A starting point when analyzing a component-wise boosting model is to take a look at the train and validation risk:
```r
plotRisk(cboost)
```
As we can see, the best validation risk is attained at iteration `which.min(cboost$getLoggerData()[["oob_risk"]])`. Hence, we should set the model to this iteration:
```r
m_optimal = which.min(cboost$getLoggerData()[["oob_risk"]])
cboost$train(m_optimal)
```
Next, we are interested in the most important base learners/features:
```r
plotFeatureImportance(cboost)
```
The last thing we can do to get a better overview of the model is to look at how the features/base learners were included into the model:
```r
plotBaselearnerTraces(cboost)
```
Next, we want to dive deeper into the effects of individual features, i.e., the effects of the base learners. To that end, we plot the partial effect of the most important feature `wt`:
```r
plotPEUni(cboost, "wt")
```
We observe a clear negative trend, meaning that an increasing weight indicates lower `mpg`.
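This trend is already visible in the raw data, independent of the fitted model. A quick sanity check in base R, using the built-in `mtcars` data:

```r
# Pearson correlation between weight and miles per gallon in the raw data:
# strongly negative, matching the partial effect above.
cor(mtcars$wt, mtcars$mpg)
#> [1] -0.8676594
```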
Additionally, we can visualize individual base learners, for example the only categorical feature `cyl`:
```r
plotBaselearner(cboost, "cyl_ridge")
```
Here, we observe that four cylinders indicate a positive contribution to `mpg`, while six and eight cylinders reduce it.
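The same pattern shows up in the raw group means of `mpg` per cylinder count, a model-free sanity check in base R:

```r
# Mean mpg per number of cylinders in the raw data: cars with 4 cylinders
# achieve clearly higher mpg than those with 6 or 8.
tapply(mtcars$mpg, mtcars$cyl, mean)
#>        4        6        8
#> 26.66364 19.74286 15.10000
```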
During prediction, we also want to get an idea of the specific contribution of each feature to the predicted score. Therefore, we take a look at the first observation in the validation data set:
```r
plotIndividualContribution(cboost, newdata = cboost$data_oob[1, ])
```
As we can see, the prediction is dominated by the offset. To remove it from the figure, we set `offset = FALSE`:
```r
plotIndividualContribution(cboost, newdata = cboost$data_oob[1, ], offset = FALSE)
```
The `wt` and `hp` base learners have a positive contribution to the predicted score. This means the car requires less fuel, while the six cylinders slightly increase the `mpg` prediction.
The last visualization convenience wrapper illustrates interactions that are included as tensor products. To use it, we first have to add tensors to the model:
```r
mtcars$vs = as.factor(mtcars$vs)
mtcars$gear = as.factor(mtcars$gear)

set.seed(31415)
cboost = Compboost$new(data = mtcars, target = "mpg", oob_fraction = 0.2)
cboost$addTensor("wt", "qsec", df = 2)
cboost$addTensor("hp", "cyl", df = 2)
cboost$addTensor("gear", "vs", df = 2)
cboost$train(500L, trace = 100L)

table(cboost$getSelectedBaselearner())
```
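Conceptually, a tensor-product base learner crosses the two marginal bases: each column of the combined design matrix is the element-wise product of one column from each marginal basis (a row-wise Kronecker product). A minimal base-R sketch of this idea, not of compboost's internals:

```r
# Toy marginal bases: intercept + linear term for each feature
# (compboost uses spline bases instead of this two-column toy basis).
B_wt   <- cbind(1, mtcars$wt)
B_qsec <- cbind(1, mtcars$qsec)

# Row-wise Kronecker product: one column per pair of marginal basis columns.
tensor <- t(sapply(seq_len(nrow(B_wt)), function(i) kronecker(B_wt[i, ], B_qsec[i, ])))
dim(tensor)
#> [1] 32  4
```

The last column of `tensor` is the product `wt * qsec`, i.e., the pure interaction term of the two toy bases.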
Depending on the feature combination (numeric - numeric, numeric - categorical, categorical - categorical), a different visualization technique is used:
```r
library(ggplot2)
library(patchwork)

gg1 = plotTensor(cboost, "wt_qsec_tensor") + ggtitle("Num - Num")
gg2 = plotTensor(cboost, "hp_cyl_tensor") + ggtitle("Num - Cat")
gg3 = plotTensor(cboost, "gear_vs_tensor") + ggtitle("Cat - Cat")

gg1 | gg2 | gg3
```