lime: Local Interpretable Model-Agnostic Explanations

When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used to explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local, interpretable model around the point in question and perturbations of this point. The approach is described in more detail in Ribeiro et al. (2016) <arXiv:1602.04938>.
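As a quick illustration, the sketch below shows a typical workflow: train any supported model, wrap it in an explainer with lime(), and generate per-prediction explanations with explain(). The random forest fitted via 'caret' on the iris data is an illustrative choice, not a requirement of the package:

library(caret)
library(lime)

# Hold out a few rows to explain later
iris_test <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab <- iris[[5]][-(1:5)]

# Fit a black box model (here a random forest, chosen for illustration)
model <- train(iris_train, iris_lab, method = "rf")

# Create an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Fit local surrogate models around the held-out points
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)

# Visualize the feature weights behind each explanation
plot_features(explanation)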

Package details

Author: Emil Hvitfeldt [aut, cre] (<https://orcid.org/0000-0002-0679-1945>), Thomas Lin Pedersen [aut] (<https://orcid.org/0000-0002-5147-4711>), Michaël Benesty [aut]
Maintainer: Emil Hvitfeldt <emilhhvitfeldt@gmail.com>
License: MIT + file LICENSE
Version: 0.5.3
URL: https://lime.data-imaginist.com, https://github.com/thomasp85/lime
Package repository: CRAN
Installation

Install the latest version of this package by entering the following in R:
install.packages("lime")
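
To try the development version instead, one option (a sketch, assuming the 'remotes' package is available) is to install directly from the GitHub repository listed above:

remotes::install_github("thomasp85/lime")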

