View source: R/main-compute_metrics.R
Description

Computes various metrics that can be used to assess LIME explainer models for different LIME implementations.
Usage

compute_metrics(explanations, metrics = "all")
Arguments

explanations: Explain data frame from the list returned by apply_lime.

metrics: Vector specifying the metrics to compute. Default is "all". See Details for the available metrics.
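For example, a subset of the available metrics can be requested by passing a character vector (an illustrative call; `res` is the apply_lime output built in the Examples section below):

# Compute only the average R2 and the MSEE (illustrative; `res` comes from
# apply_lime as shown in the Examples section)
compute_metrics(res$explain, metrics = c("ave_r2", "msee"))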
Details

The metrics available are listed below.

ave_r2: Average explainer model R2 value computed over all explanations in the test set.

msee: Mean square explanation error (MSEE) computed over all explanations in the test set.

ave_fidelity: Average fidelity metric (Ribeiro et al. 2016) computed over all explanations in the test set.
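As a rough numerical illustration of what these quantities measure (a sketch only, not the package's internal code; all column names and values below are hypothetical), consider a data frame holding one row per explained observation:

# Hypothetical per-observation values produced by a LIME explainer
expl <- data.frame(
  r2           = c(0.91, 0.87, 0.95),  # explainer model R2 per explanation
  complex_pred = c(0.78, 0.60, 0.61),  # prediction from the complex model
  explain_pred = c(0.80, 0.55, 0.63)   # prediction from the LIME explainer
)

# ave_r2: average explainer R2 over all explanations in the test set
mean(expl$r2)

# msee: assumed here to be the mean squared difference between the complex
# model's predictions and the explainer's predictions on the test set
mean((expl$complex_pred - expl$explain_pred)^2)

# Locally weighted square loss of Ribeiro et al. (2016) for one explanation:
# f_z and g_z are predictions from the complex and explainer models on
# perturbed points z, and w holds the proximity weights (all hypothetical)
fidelity <- function(f_z, g_z, w) sum(w * (f_z - g_z)^2)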
References

Ribeiro, M. T., S. Singh, and C. Guestrin, 2016: "Why Should I Trust You?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, 1135–1144.
Examples

# Prepare training and testing data
x_train <- sine_data_train[c("x1", "x2", "x3")]
y_train <- factor(sine_data_train$y)
x_test  <- sine_data_test[1:5, c("x1", "x2", "x3")]
# Fit a random forest model
rf <- randomForest::randomForest(x = x_train, y = y_train)
# Run apply_lime
res <- apply_lime(train = x_train,
                  test = x_test,
                  model = rf,
                  label = "1",
                  n_features = 2,
                  sim_method = c('quantile_bins',
                                 'kernel_density'),
                  nbins = 2:3)
# Compute metrics to compare lime implementations
compute_metrics(res$explain)
# Return a table with only the MSEE values
compute_metrics(res$explain, metrics = "msee")
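# The returned metric table can be handled like an ordinary data frame.
# As an illustration only (the column name below is hypothetical and may
# not match the actual output), order the LIME implementations by MSEE:
metric_table <- compute_metrics(res$explain)
metric_table[order(metric_table$msee), ]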