lime-package: lime: Local Interpretable Model-Agnostic Explanations


Description


When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used to explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local, interpretable model around the point in question and perturbations of this point. The approach is described in more detail in Ribeiro et al. (2016), arXiv:1602.04938.

Details

This package is a port of the original Python lime package, implementing the prediction explanation framework laid out in Ribeiro et al. (2016). The package supports models from caret and mlr natively; see the documentation for how to make it work with any other model.

Main functions:

Use of lime is mainly through two functions. First you create an explainer object with the lime() function, based on the training data and the model; then you use the explain() function, together with new observations and the explainer, to create explanations of the model output.

Along with these two functions, lime also provides the plot_features() and plot_text_explanations() functions to visualise the explanations directly; a minimal workflow sketch follows below.
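As an illustration, the sketch below runs this workflow with a caret model on the built-in iris data. The model choice ("rf"), the split between training rows and rows to explain, and the n_labels/n_features settings are assumptions made for the example, not requirements of the package.

    # A minimal sketch of the lime workflow, assuming a random forest fitted
    # with caret on the built-in iris data (caret models are supported natively)
    library(caret)
    library(lime)

    # Hold out a few observations to explain later
    to_explain <- iris[1:4, 1:4]
    train_set  <- iris[-(1:4), ]

    # Any caret model works; "rf" is only an example choice
    model <- train(Species ~ ., data = train_set, method = "rf")

    # Step 1: create an explainer from the training data and the model
    explainer <- lime(train_set[, 1:4], model)

    # Step 2: explain new observations; n_labels and n_features control how
    # many outcome labels and features each explanation is built from
    explanation <- explain(to_explain, explainer, n_labels = 1, n_features = 4)

    # Visualise the explanations
    plot_features(explanation)

plot_text_explanations() plays the corresponding role for text models, rendering each explanation as text with the influential words highlighted.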

Author(s)

Maintainer: Emil Hvitfeldt <emilhhvitfeldt@gmail.com>


References

Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. https://arxiv.org/abs/1602.04938
