```
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  out.width = "80%"
)
```

```
knitr::opts_chunk$set(include = FALSE)
library(PPtreeregViz)
library(ggplot2)
library(dplyr)
```

This package was developed to visualize the Projection Pursuit Regression Tree model and to add explanatory capability to the model using \code{XAI (eXplainable AI)} techniques. Since a projection pursuit regression tree is tree-based and grows using projections of the input features, the model itself is highly interpretable. By visualizing each node of the model, a global analysis of the model is possible. (This method is model-specific because it can only be used with the \code{PPTreereg} model.) Global interpretation with this method is possible, but interpreting a single observation is difficult because it passes through several projections. To overcome this, existing \code{XAI} techniques were slightly modified to fit the structure of the \code{PPTreereg} model. Using these visualization methods, it is possible to figure out how and which features affected the model’s prediction. Through these processes, we can determine whether the model is trustworthy or not.

You can install the released version of \code{PPtreeregViz} from CRAN with:

```
install.packages("PPtreeregViz")
```

And the development version from GitHub with:

```
# install.packages("devtools")
devtools::install_github("sunsmiling/PPtreeregViz")
```

As an example, the Boston house price data from the MASS library was used. In the first part, we will talk about visualizing the model itself. Next, we will see an example of explaining the model by applying \code{XAI} techniques.

The Boston data were divided into a train data set and a test data set at a ratio of 7:3. In addition, one observation was drawn from the test data set (with the seed fixed) and named “sample_one”.

```
library(MASS)
data("Boston")
set.seed(1234)
proportion <- 0.7
idx_train <- sample(1:nrow(Boston), size = round(proportion * nrow(Boston)))
sample_train <- Boston[idx_train, ]
sample_test <- Boston[-idx_train, ]
sample_one <- sample_test[sample(1:nrow(sample_test), 1), -14]
```

Create a \code{PPTreereg} model with depth 2 for ease of visualization and interpretation.

```
library(PPtreeregViz)
Model <- PPtreeregViz::PPTreereg(medv ~ ., data = sample_train, DEPTH = 2)
```

```
plot(Model)
```

Through `pp_ggparty`, marginal predicted values and actual values are drawn against an independent variable for each final node. In the group with the lower 25% of house prices, \code{lstat} (lower status of the population, in percent) ranged widely from 10 to 30, but in the group with the top 25%, \code{lstat} only took values less than 15.

```
pp_ggparty(Model, "lstat", final.rule = 1)
```

```
pp_ggparty(Model, "lstat", final.rule = 4)
```

```
pp_ggparty(Model, "lstat", final.rule = 5)
```

By combining the regression coefficients of the projections at each split node, the importance of the variables in the fitted model can be calculated. `PPimportance` calculates the split nodes’ coefficients, and the result can be drawn for each final leaf. The blue bars represent positive slopes (effects), and the red bars represent negative slopes.

Variables are sorted by the overall size of each bar, so you can read off, in order, the variables that are considered important for each final node.

```
Tree.Imp <- PPimportance(Model)
plot(Tree.Imp)
```

If you use arguments such as `marginal = TRUE` and `num_var`, you can see the desired number of marginal variable importances for the model as a whole rather than for each final leaf.

```
plot(Tree.Imp, marginal = TRUE, num_var = 5)
```

`PPregNodeViz` can visualize how the training data are fitted at each node. When `node.id` is 4 (i.e., the first final node), the fitted data are displayed in black. To improve accuracy, \code{PPTreereg} can choose the final rule from 1 to 5, that is, whether to use a single value or a linear combination of the independent variables.

```
PPregNodeViz(Model, node.id = 1)
```

```
PPregNodeViz(Model, node.id = 4)
```

The 4th final leaf’s node id is 7.

```
PPregNodeViz(Model, node.id = 7)
```

Using `PPregVarViz` shows results similar to partial dependence plots: how an independent variable affects the prediction of Y in the actual data. If the argument `indiv = TRUE` is set, the plot is drawn on a separate grid for each final node.

```
PPregVarViz(Model, "lstat")
```

```
PPregVarViz(Model, "lstat", indiv = TRUE)
```

```
PPregVarViz(Model, "chas", var.factor = TRUE)
```

```
PPregVarViz(Model, "chas", indiv = TRUE, var.factor = TRUE)
```

So far, we have only seen the global behavior of the model itself. From now on, we will proceed with model analysis using SHAP values. Using SHAP values, you can see locally how one sample moves through the model. To calculate the SHAP values faster, the Kernel SHAP method of the \code{NorskRegnesentral/shapr} package (\url{https://github.com/NorskRegnesentral/shapr}) was slightly modified and used.

```
sample_one
```

Since the `empirical` method, which is the more accurate calculation, takes longer to compute, the `simple` method, which approximates it, was used.

```
ppshapr.simple(PPTreeregOBJ = Model, testObs = sample_one, final.rule = 5)$dt
```

Although the difference in calculation speed between \code{ppshapr.simple} and \code{ppshapr.empirical} is quite large, it can be seen that the results are similar.
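This similarity can be checked directly. A minimal sketch, assuming the `Model` and `sample_one` objects from the code above, and assuming `ppshapr.empirical` takes the same arguments as `ppshapr.simple` (as its naming suggests):

```r
# Kernel SHAP contributions for the same observation, two estimators.
shap_simple <- ppshapr.simple(PPTreeregOBJ = Model, testObs = sample_one,
                              final.rule = 5)$dt
shap_emp    <- ppshapr.empirical(PPTreeregOBJ = Model, testObs = sample_one,
                                 final.rule = 5)$dt

# Inspect the two tables side by side: the contribution estimates should be
# close, while the empirical version takes noticeably longer to run.
shap_simple
shap_emp
```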

\code{PPTreereg} creates the tree based on the range of y values. Therefore, when calculating the contributions of the features of one observation, it is natural that different values are calculated for each final leaf.
Compared with the data whose y values are in the lower 25% (first final leaf), the effect of \code{lstat} on `sample_one` was very large. On the other hand, the influence of \code{rm} (average number of rooms per dwelling) is very large in the data with y values in the upper 25% (4th final leaf).
How each feature affects y-hat for one observation can be drawn in two ways: `decisionplot` and `waterfallplot`.

```
decisionplot(Model, testObs = sample_one, method = "simple",
             varImp = "shapImp", final.rule = 5)
```

```
waterfallplot(Model, testObs = sample_one, method = "simple", final.rule = 5)
```
