lime-package | R Documentation
When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016), arXiv:1602.04938.
This package is a port of the original Python lime package implementing the
prediction explanation framework laid out in Ribeiro et al. (2016). The
package supports models from caret and mlr natively, but see the docs for
how to make it work with any model.
Main functions:
Use of lime is mainly through two functions. First you create an
explainer object using the lime() function, based on the training data and
the model; then you use the explain() function, along with new data and the
explainer, to create explanations for the model output.
Along with these two functions, lime also provides the plot_features()
and plot_text_explanations() functions to visualise the explanations
directly.
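A minimal sketch of this workflow is shown below, using a caret model on the iris data; the data split, model type, and argument values are illustrative assumptions, not part of this documentation (method = "rf" also assumes the randomForest package is installed).

    library(caret)
    library(lime)

    # Train a classifier on most of iris; keep a few rows aside to explain
    # (illustrative split, not from the documentation)
    train_idx <- 1:140
    model <- train(Species ~ ., data = iris[train_idx, ], method = "rf")

    # Step 1: build an explainer from the training data (predictors only)
    # and the fitted model
    explainer <- lime(iris[train_idx, -5], model)

    # Step 2: explain new observations using the explainer
    explanation <- explain(iris[-train_idx, -5], explainer,
                           n_labels = 1, n_features = 4)

    # Visualise the explanations
    plot_features(explanation)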
Maintainer: Emil Hvitfeldt emilhhvitfeldt@gmail.com
Authors:
Thomas Lin Pedersen thomasp85@gmail.com
Michaël Benesty michael@benesty.fr
Ribeiro, M.T., Singh, S., Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv:1602.04938, https://arxiv.org/abs/1602.04938