compute_metrics: Compute assessment metrics for LIME implementations


View source: R/main-compute_metrics.R

Description

Computes various metrics that can be used to assess LIME explainer models for different LIME implementations.

Usage

compute_metrics(explanations, metrics = "all")

Arguments

explanations

The explain data frame from the list returned by apply_lime.

metrics

Character vector specifying the metrics to compute. The default is 'all'. See Details for the available metrics.

Details

The available metrics are described below; the examples on this page demonstrate the 'msee' metric.
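
As an illustration of the kind of computation involved, the sketch below computes a mean squared explanation error ('msee', the metric name used in the Examples). This is a minimal sketch, not the package's implementation: it assumes the explain data frame carries the complex model's predicted probability in label_prob and the LIME explainer's local prediction in model_prediction, following the output of the lime package's explain function, which is not confirmed by this page.

# Minimal MSEE-style sketch (the column names label_prob and
# model_prediction are assumptions based on the lime package's
# explain output, not confirmed by this page)
msee_sketch <- function(explanations) {
  # Each case is repeated across its n_features rows, so reduce to
  # one row per case before averaging
  per_case <- unique(
    explanations[c("case", "label_prob", "model_prediction")]
  )
  mean((per_case$label_prob - per_case$model_prediction)^2)
}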

References

Ribeiro, M. T., S. Singh, and C. Guestrin, 2016: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13–17, 2016, 1135–1144.

Examples

# Prepare training and testing data
x_train <- sine_data_train[c("x1", "x2", "x3")]
y_train <- factor(sine_data_train$y)
x_test <- sine_data_test[1:5, c("x1", "x2", "x3")]

# Fit a random forest model
rf <- randomForest::randomForest(x = x_train, y = y_train)

# Run apply_lime
res <- apply_lime(train = x_train, 
                  test = x_test, 
                  model = rf,
                  label = "1",
                  n_features = 2,
                  sim_method = c('quantile_bins',
                                 'kernel_density'),
                  nbins = 2:3)
                  
# Compute metrics to compare lime implementations
compute_metrics(res$explain)

# Return a table with only the MSEE values
compute_metrics(res$explain, metrics = "msee")
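
The result can be stored and inspected like any other object; a minimal sketch, assuming compute_metrics returns a data frame of metric values (the return value is not documented on this page):

# Store and inspect the returned metrics (assumed to be a data frame)
metrics_table <- compute_metrics(res$explain, metrics = "msee")
str(metrics_table)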
