explain: Explain model predictions

View source: R/explain.R

explain {lime}    R Documentation

Explain model predictions

Description

Once an explainer has been created using the lime() function, it can be used to explain the result of the model on new observations. The explain() function takes new observations along with the explainer and returns a data.frame with prediction explanations, one row per feature per explained observation. The returned explanations can then be visualised in a number of ways, e.g. with plot_features().
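
As a minimal sketch, assuming a classifier explainer created with lime() and new observations new_obs in the training-data format (hypothetical names):

explanation <- explain(new_obs, explainer, n_labels = 1, n_features = 4)
plot_features(explanation)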

Usage

## S3 method for class 'data.frame'
explain(
  x,
  explainer,
  labels = NULL,
  n_labels = NULL,
  n_features,
  n_permutations = 5000,
  feature_select = "auto",
  dist_fun = "gower",
  kernel_width = NULL,
  gower_pow = 1,
  ...
)

## S3 method for class 'character'
explain(
  x,
  explainer,
  labels = NULL,
  n_labels = NULL,
  n_features,
  n_permutations = 5000,
  feature_select = "auto",
  single_explanation = FALSE,
  ...
)

explain(
  x,
  explainer,
  labels,
  n_labels = NULL,
  n_features,
  n_permutations = 5000,
  feature_select = "auto",
  ...
)

## S3 method for class 'imagefile'
explain(
  x,
  explainer,
  labels = NULL,
  n_labels = NULL,
  n_features,
  n_permutations = 1000,
  feature_select = "auto",
  n_superpixels = 50,
  weight = 20,
  n_iter = 10,
  p_remove = 0.5,
  batch_size = 10,
  background = "grey",
  ...
)

Arguments

x

New observations to explain, of the same format as used when creating the explainer.

explainer

An explainer object to use for explaining the observations.

labels

The specific labels (classes) to explain in case the model is a classifier. For classifiers, either this or n_labels must be given.

n_labels

The number of labels to explain. If this is given for classifiers, the top n_labels classes will be explained.
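
For classifiers the two selectors look as follows; a sketch, assuming an existing explainer and new observations new_obs ("setosa" stands in for any class label the model knows):

# Explain one named class only
explain(new_obs, explainer, labels = "setosa", n_features = 2)

# ... or the single most probable class per observation
explain(new_obs, explainer, n_labels = 1, n_features = 2)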

n_features

The number of features to use for each explanation.

n_permutations

The number of permutations to use for each explanation.

feature_select

The algorithm to use for selecting features (see the sketch after this list). One of:

  • "auto": If n_features <= 6 use "forward_selection" else use "highest_weights".

  • "none": Ignore n_features and use all features.

  • "forward_selection": Add one feature at a time until n_features is reached, based on quality of a ridge regression model.

  • "highest_weights": Fit a ridge regression and select the n_features with the highest absolute weight.

  • "lasso_path": Fit a lasso model and choose the n_features whose lars path converge to zero the latest.

  • "tree" : Fit a tree to select n_features (which needs to be a power of 2). It requires last version of XGBoost.

dist_fun

The distance function to use for calculating the distance from the observation to the permutations. If dist_fun = 'gower' (the default), gower::gower_dist() is used. Otherwise the value is forwarded to stats::dist().

kernel_width

The width of the exponential kernel that will be used to convert the distance to a similarity in case dist_fun != 'gower'.

gower_pow

A modifier for gower distance. The calculated distance will be raised to the power of this value.
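
Together, dist_fun, kernel_width and gower_pow control how much weight each permutation carries when fitting the local model. A sketch, assuming an existing explainer and observations new_obs (hypothetical names):

# Default gower distance, sharpened by squaring it
explain(new_obs, explainer, n_labels = 1, n_features = 2, gower_pow = 2)

# A stats::dist() method combined with an exponential kernel
explain(new_obs, explainer, n_labels = 1, n_features = 2,
        dist_fun = "euclidean", kernel_width = 0.75)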

...

Parameters passed on to the predict_model() method.

single_explanation

A boolean indicating whether to pool all text in x into a single explanation.
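
This applies to the character method only. A sketch; text_explainer and test_text are hypothetical stand-ins for an explainer built on a character vector and the documents to explain:

# Pool all documents in test_text into one combined explanation
explain(test_text, text_explainer, n_labels = 1, n_features = 3,
        single_explanation = TRUE)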

n_superpixels

The number of segments an image should be split into.

weight

How strongly locality should be weighted relative to colour. High values lead to more compact superpixels, while low values follow the image structure more closely.

n_iter

How many iterations the segmentation should run for.

p_remove

The probability that a superpixel will be removed in each permutation.

batch_size

The number of explanations to handle at a time.

background

The colour to use for blocked-out superpixels.
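
The segmentation parameters above apply to the imagefile method only. A sketch; img_explainer is a hypothetical explainer built around an image classifier, and "image.jpg" a placeholder path:

# Coarser segmentation and a white fill for removed superpixels
explain("image.jpg", img_explainer, n_labels = 1, n_features = 10,
        n_superpixels = 30, p_remove = 0.5, background = "white")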

Value

A data.frame encoding the explanations, one row per feature per explained observation. The columns are:

  • model_type: The type of the model used for prediction.

  • case: The case being explained (the rowname in cases).

  • model_r2: The quality of the model used for the explanation.

  • model_intercept: The intercept of the model used for the explanation.

  • model_prediction: The prediction of the observation based on the model used for the explanation.

  • feature: The feature used for the explanation.

  • feature_value: The value of the feature used.

  • feature_weight: The weight of the feature in the explanation.

  • feature_desc: A human readable description of the feature importance.

  • data: Original data being explained.

  • prediction: The original prediction from the model.

Furthermore, classification explanations will also contain:

  • label: The label being explained.

  • label_prob: The probability of the label as predicted by the model.
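
Since the result is an ordinary data.frame it can be inspected directly. A sketch, assuming explanation holds the result of a call to explain():

# Show the strongest features first, by absolute weight
explanation[order(-abs(explanation$feature_weight)),
            c("case", "feature", "feature_weight", "feature_desc")]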

Examples

# Train a model and create an explainer for it
library(lime)
library(MASS)
iris_test <- iris[1, 1:4]
iris_train <- iris[-1, 1:4]
iris_lab <- iris[[5]][-1]
model <- lda(iris_train, iris_lab)
explanation <- lime(iris_train, model)

# This can now be used together with the explain method
explain(iris_test, explanation, n_labels = 1, n_features = 2)
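
# Captured explanations can be visualised, e.g. with plot_features()
explanations <- explain(iris_test, explanation, n_labels = 1, n_features = 2)
plot_features(explanations)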

