explain {fastshap} | R Documentation
Description

Compute fast (approximate) Shapley values for a set of features using the Monte Carlo algorithm described in Štrumbelj and Kononenko (2014). An efficient algorithm for tree-based models, commonly referred to as Tree SHAP, is also supported for lightgbm and xgboost models; see Lundberg et al. (2020) for details.
Usage

explain(object, ...)

## Default S3 method:
explain(
  object,
  feature_names = NULL,
  X = NULL,
  nsim = 1,
  pred_wrapper = NULL,
  newdata = NULL,
  adjust = FALSE,
  baseline = NULL,
  shap_only = TRUE,
  parallel = FALSE,
  ...
)

## S3 method for class 'lm'
explain(
  object,
  feature_names = NULL,
  X,
  nsim = 1,
  pred_wrapper,
  newdata = NULL,
  adjust = FALSE,
  exact = FALSE,
  baseline = NULL,
  shap_only = TRUE,
  parallel = FALSE,
  ...
)

## S3 method for class 'xgb.Booster'
explain(
  object,
  feature_names = NULL,
  X = NULL,
  nsim = 1,
  pred_wrapper,
  newdata = NULL,
  adjust = FALSE,
  exact = FALSE,
  baseline = NULL,
  shap_only = TRUE,
  parallel = FALSE,
  ...
)

## S3 method for class 'lgb.Booster'
explain(
  object,
  feature_names = NULL,
  X = NULL,
  nsim = 1,
  pred_wrapper,
  newdata = NULL,
  adjust = FALSE,
  exact = FALSE,
  baseline = NULL,
  shap_only = TRUE,
  parallel = FALSE,
  ...
)
Arguments

object: A fitted model object (e.g., a stats::lm(), xgb.Booster, or lgb.Booster object).

...: Additional optional arguments to be passed on to foreach::foreach() whenever parallel = TRUE.

feature_names: Character string giving the names of the predictor variables (i.e., features) of interest. If NULL (the default), the column names of X are used.

X: A matrix-like R object (e.g., a data frame or matrix) containing ONLY the feature columns from the training data (or a suitable background data set). NOTE: This argument is required whenever exact = FALSE.

nsim: The number of Monte Carlo repetitions to use for estimating each Shapley value (only used when exact = FALSE). Default is 1.

pred_wrapper: Prediction function that requires two arguments, object and newdata, and returns a numeric vector with one prediction per row of newdata; see the sketch after this argument list.

newdata: A matrix-like R object (e.g., a data frame or matrix) containing ONLY the feature columns for the observation(s) of interest; that is, the observation(s) you want to compute explanations for. Default is NULL, in which case explanations are computed for every row of X.

adjust: Logical indicating whether or not to adjust the sum of the estimated Shapley values to satisfy the local accuracy property; that is, to equal the difference between the model's prediction for that sample and the average prediction over all the training data (i.e., X). Default is FALSE.

baseline: Numeric baseline to use when adjusting the computed Shapley values to achieve local accuracy. Adjusted Shapley values for a single prediction sum to the difference between that prediction and the baseline. Default is NULL, which corresponds to the average prediction over the training data X.

shap_only: Logical indicating whether or not to include additional output useful for plotting (i.e., the corresponding feature values and baseline value; see the Value section below). Default is TRUE.

parallel: Logical indicating whether or not to compute the approximate Shapley values in parallel across features; default is FALSE. Parallel execution uses the foreach framework, so an appropriate parallel backend must be registered.

exact: Logical indicating whether to compute exact Shapley values. Currently only available for stats::lm()/stats::glm(), xgb.Booster, and lgb.Booster objects. Default is FALSE.
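The prediction wrapper is where output from predict() gets coerced into the plain numeric vector that explain() expects. Below is a minimal, self-contained sketch for a binary classifier fit with stats::glm(); the names fit_glm, pfun_prob, and X_bg are illustrative only and not part of the package.

# Illustrative only: logistic regression on mtcars (am as the binary outcome)
fit_glm <- glm(am ~ hp + wt, data = mtcars, family = binomial)

# pred_wrapper must accept `object` and `newdata` and return a numeric vector;
# for a classifier that typically means the predicted class probabilities
pfun_prob <- function(object, newdata) {
  predict(object, newdata = newdata, type = "response")
}

X_bg <- subset(mtcars, select = c(hp, wt))  # background data: feature columns only
set.seed(102)  # for reproducibility
shap_glm <- explain(fit_glm, X = X_bg, nsim = 50, pred_wrapper = pfun_prob)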
Value

If shap_only = TRUE (the default), a matrix is returned with one column for each feature specified in feature_names (if feature_names = NULL, the default, there will be one column for each feature in X) and one row for each observation in newdata (if newdata = NULL, the default, there will be one row for each observation in X). Additionally, the returned matrix will have an attribute called "baseline" containing the baseline value. If shap_only = FALSE, then a list is returned with three components:

shapley_values - a matrix of Shapley values (as described above);

feature_values - the corresponding feature values (for plotting with shapviz::shapviz());

baseline - the corresponding baseline value (for plotting with shapviz::shapviz()).
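As a sketch of how the shap_only = FALSE output feeds into shapviz (assuming the shapviz package is installed and reusing the fit and pfun objects from the Examples section below; the names ex and shv are illustrative):

# Return Shapley values together with feature values and the baseline
ex <- explain(fit, X = subset(mtcars, select = -mpg), nsim = 10,
              pred_wrapper = pfun, shap_only = FALSE)

# Hand all three components to shapviz for plotting
shv <- shapviz::shapviz(ex$shapley_values, X = ex$feature_values,
                        baseline = ex$baseline)
shapviz::sv_importance(shv)             # importance plot
shapviz::sv_waterfall(shv, row_id = 1)  # local explanation for the first row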
Note

Setting exact = TRUE with a linear model (i.e., a stats::lm() or stats::glm() object) assumes that the input features are independent. Also, setting adjust = TRUE is experimental; we follow the same approach as in the Python shap package.
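As a concrete sketch of the exact case for a linear model (illustrative only: it assumes that X and pred_wrapper can be omitted when exact = TRUE, as the X argument description above suggests, and the names fit_lm and shap_lm are made up):

# Linear model on the mtcars data
fit_lm <- lm(mpg ~ ., data = mtcars)

# Exact (Linear SHAP) explanations; assumes independent features (see above)
shap_lm <- explain(fit_lm, exact = TRUE, newdata = subset(mtcars, select = -mpg))
head(shap_lm)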
References

Štrumbelj, E., and Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3), 647-665.

Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., and Lee, S.-I. (2020). From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2(1), 56-67.
Examples

You can find more examples (with larger and more realistic data sets) on the fastshap GitHub repository: https://github.com/bgreenwell/fastshap.
#
# A projection pursuit regression (PPR) example
#

# Load the sample data; see ?datasets::mtcars for details
data(mtcars)

# Fit a projection pursuit regression model
fit <- ppr(mpg ~ ., data = mtcars, nterms = 5)

# Prediction wrapper
pfun <- function(object, newdata) {  # needs to return a numeric vector
  predict(object, newdata = newdata)
}

# Compute approximate Shapley values using 10 Monte Carlo simulations
set.seed(101)  # for reproducibility
shap <- explain(fit, X = subset(mtcars, select = -mpg), nsim = 10,
                pred_wrapper = pfun)
head(shap)
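A similar sketch for the Tree SHAP path mentioned in the Description, using xgboost. This is illustrative rather than a verbatim package example: it assumes xgboost is installed and that supplying exact = TRUE with newdata is sufficient for the xgb.Booster method (no X, nsim, or pred_wrapper); the names X_mat, dtrain, bst, and shap_xgb are made up.

# Fit a small xgboost model to the mtcars data
library(xgboost)
X_mat <- data.matrix(subset(mtcars, select = -mpg))  # xgboost requires a matrix
dtrain <- xgb.DMatrix(data = X_mat, label = mtcars$mpg)
bst <- xgb.train(params = list(objective = "reg:squarederror"),
                 data = dtrain, nrounds = 50, verbose = 0)

# Exact Tree SHAP explanations for the training rows
shap_xgb <- explain(bst, exact = TRUE, newdata = X_mat)
head(shap_xgb)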