xgrove    R Documentation
Description

Compute surrogate groves to explain a predictive machine learning model and to analyze complexity vs. explanatory power.

Usage
xgrove(
model,
data,
ntrees = c(4, 8, 16, 32, 64, 128),
pfun = NULL,
remove.target = T,
shrink = 1,
b.frac = 1,
seed = 42,
...
)
Arguments

model: A model with a corresponding predict function that returns numeric values.

data: Training data.

ntrees: Sequence of integers: number of boosting trees for rule extraction.

pfun: Optional predict function function(model, data) returning a real number; defaults to the predict() method of the model.

remove.target: Logical. If TRUE the target variable is automatically removed from data in case it is still contained.

shrink: Sets the shrinkage argument for the internal call of gbm.

b.frac: Sets the bag.fraction argument for the internal call of gbm.

seed: Seed for the random number generator to ensure reproducible results (e.g. for the default bag.fraction < 1 in gbm).

...: Further arguments to be passed to gbm or the predict() method of the model.
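If the predict() method of the model does not directly return a numeric vector, a wrapper can be supplied via pfun. A minimal sketch of the expected signature (the type = "response" argument is only an illustrative assumption and depends on the model class):

# pfun must accept the model and a data set and return a numeric vector.
pf <- function(model, data) {
  as.numeric(predict(model, newdata = data, type = "response"))  # 'type' is an assumed example
}

See also the iris example in the Examples section below, where a predict function for posterior probabilities is defined.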
Details

A surrogate grove is trained via gradient boosting using gbm on data with the predictions of the model as target variable. Note that data must not contain the original target variable! The boosting model is trained using stumps of depth 1, and the resulting interpretation is extracted from pretty.gbm.tree.

The column upper_bound_left of the rules and groves elements of the output object contains the split point for numeric variables, denoting the upper bound of the left branch. Correspondingly, the levels_left column contains the levels of factor variables assigned to the left branch. The rule weights of the branches are given in the rightmost columns. The prediction of the grove is obtained as the sum of the assigned weights over all rows.

Note that the training data must not contain the target variable. It can either be removed manually or it will be removed automatically from data if the argument remove.target == TRUE.
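As an illustration of this rule format, the following sketch scores a single observation by hand. The columns variable, upper_bound_left and levels_left are taken from the description above; the weight column names weight_left and weight_right are hypothetical stand-ins for the rightmost weight columns of the actual output.

# Minimal sketch: score one observation from a grove's rule table.
# weight_left / weight_right are assumed, illustrative column names.
score_obs <- function(rules, obs) {
  w <- numeric(nrow(rules))
  for (i in seq_len(nrow(rules))) {
    v <- as.character(rules$variable[i])
    goes_left <- if (is.numeric(obs[[v]])) {
      obs[[v]] <= rules$upper_bound_left[i]  # numeric split: upper bound of left branch
    } else {
      # factor split: levels assigned to the left branch
      as.character(obs[[v]]) %in% strsplit(as.character(rules$levels_left[i]), ", ")[[1]]
    }
    w[i] <- if (goes_left) rules$weight_left[i] else rules$weight_right[i]
  }
  sum(w)  # prediction of the grove: sum of assigned weights over all rules
}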
Value

List of the results:

explanation: Matrix containing tree sizes, rules, explainability and the correlation between the predictions of the explanation and the true model.

rules: Summary of the explanation grove: rules with identical splits are aggregated. For numeric variables, splits are merged if they lead to identical partitions of the training data.

groves: Rules of the explanation grove.

model: The underlying gbm model.
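With a fitted object xg as created in the Examples section below, the list elements can be accessed in the usual way:

xg$explanation  # complexity vs. explainability per grove size
xg$rules        # aggregated rules of the explanation grove
xg$groves       # full rule tables
xg$model        # underlying gbm model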
References

Szepannek, G. and von Holt, B.H. (2023): Can’t see the forest for the trees – analyzing groves to explain random forests. Behaviormetrika. DOI: 10.1007/s41237-023-00205-2.

Szepannek, G. and Luebke, K. (2023): How much do we see? On the explainability of partial dependence plots for credit risk scoring. Argumenta Oeconomica 50. DOI: 10.15611/aoe.2023.1.07.
Examples

library(randomForest)
library(pdp)
data(boston)
set.seed(42)
rf <- randomForest(cmedv ~ ., data = boston)
data <- boston[,-3] # remove target variable
ntrees <- c(4,8,16,32,64,128)
xg <- xgrove(rf, data, ntrees)
xg
plot(xg)
# Example of a classification problem using the iris data.
# A predict function has to be defined, here returning the posterior
# probabilities of the class virginica.
data(iris)
set.seed(42)
rf <- randomForest(Species ~ ., data = iris)
data <- iris[,-5] # remove target variable
pf <- function(model, data){
predict(model, data, type = "prob")[,3]
}
xgrove(rf, data, pfun = pf)