h2o.shap_explain_row_plot
SHAP explanation shows the contribution of each feature for a given instance. The sum of the feature contributions and the bias term equals the raw prediction of the model, i.e., the prediction before the inverse link function is applied. H2O implements TreeSHAP, which, when features are correlated, can increase the contribution of a feature that had no influence on the prediction.
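The additivity described above can be checked directly with h2o.predict_contributions(). The following is a minimal sketch, not part of this page's example, reusing the wine dataset from the example below; the default gaussian (identity-link) assumption and the use of rowSums() are mine.

library(h2o)
h2o.init()
f  <- "https://h2o-public-test-data.s3.amazonaws.com/smalldata/wine/winequality-redwhite-no-BOM.csv"
df <- h2o.importFile(f)
gbm <- h2o.gbm(y = "quality", training_frame = df)
# Per-feature SHAP contributions plus a BiasTerm column
contribs <- as.data.frame(h2o.predict_contributions(gbm, df))
# With the default gaussian distribution the link is the identity, so the
# row-wise sum of the contributions (including BiasTerm) matches the prediction
preds <- as.data.frame(h2o.predict(gbm, df))
head(cbind(sum_of_contributions = rowSums(contribs), prediction = preds$predict))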
h2o.shap_explain_row_plot(
  model,
  newdata,
  row_index,
  columns = NULL,
  top_n_features = 10,
  plot_type = c("barplot", "breakdown"),
  contribution_type = c("both", "positive", "negative"),
  background_frame = NULL
)
model
An H2O tree-based model. This includes Random Forest, GBM and XGBoost only. Must be a binary classification or regression model.

newdata
An H2O Frame used to determine the feature contributions.

row_index
Index of the row (instance) to explain.

columns
List of columns, or list of column indices, to show. If specified, the top_n_features parameter is ignored.

top_n_features
Integer specifying the maximum number of columns to show (ranked by their contributions). When plot_type = "barplot", top_n_features features are chosen for each contribution_type.

plot_type
Either "barplot" or "breakdown". Defaults to "barplot".

contribution_type
Used when plot_type = "barplot"; selects which contributions to plot: "both", "positive", or "negative". Defaults to "both".

background_frame
Optional frame used as the source of baselines for marginal SHAP values.
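A brief sketch of how these arguments interact (it assumes a fitted model gbm and a test frame test as in the example below; the wine column names are illustrative):

# Breakdown-style explanation of the same row
h2o.shap_explain_row_plot(gbm, test, row_index = 1, plot_type = "breakdown")
# Barplot restricted to negative contributions only
h2o.shap_explain_row_plot(gbm, test, row_index = 1,
                          contribution_type = "negative")
# Show only selected columns (top_n_features is then ignored);
# "alcohol" and "density" are assumed column names from the wine data
h2o.shap_explain_row_plot(gbm, test, row_index = 1,
                          columns = c("alcohol", "density"))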
Returns a ggplot2 object.
## Not run:
library(h2o)
h2o.init()
# Import the wine dataset into H2O:
f <- "https://h2o-public-test-data.s3.amazonaws.com/smalldata/wine/winequality-redwhite-no-BOM.csv"
df <- h2o.importFile(f)
# Set the response
response <- "quality"
# Split the dataset into a train and test set:
splits <- h2o.splitFrame(df, ratios = 0.8, seed = 1)
train <- splits[[1]]
test <- splits[[2]]
# Build and train the model:
gbm <- h2o.gbm(y = response,
training_frame = train)
# Create the SHAP row explanation plot
shap_explain_row_plot <- h2o.shap_explain_row_plot(gbm, test, row_index = 1)
print(shap_explain_row_plot)
## End(Not run)
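Because the return value is a ggplot2 object, it can be customized or saved with standard ggplot2 functions. A minimal sketch, assuming the example above has been run (the title and file name are illustrative):

library(ggplot2)
shap_explain_row_plot + ggtitle("SHAP explanation for test row 1")
ggsave("shap_row_1.png", plot = shap_explain_row_plot, width = 8, height = 5)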