When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016) <arXiv:1602.04938>.
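As a brief illustration of that workflow, the sketch below follows the package's documented `lime()`/`explain()` pattern: an explainer is built from the training data and the model, then local explanations are produced for a few held-out rows. The use of 'caret', the random-forest model, and the iris split are illustrative choices, not requirements of the package.

```r
library(caret)  # model training; 'lime' supports caret models out of the box
library(lime)

# Hold out a few rows to explain and train on the rest (illustrative split)
iris_test  <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab   <- iris[[5]][-(1:5)]

# Fit a black box model; here a random forest via caret
model <- train(iris_train, iris_lab, method = "rf")

# Create an explainer from the training data and the model
explainer <- lime(iris_train, model)

# Fit a local model around each held-out point and its perturbations
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)
head(explanation)
```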
| Package details | |
|---|---|
| Author | Emil Hvitfeldt [aut, cre] (<https://orcid.org/0000-0002-0679-1945>), Thomas Lin Pedersen [aut] (<https://orcid.org/0000-0002-5147-4711>), Michaël Benesty [aut] |
| Maintainer | Emil Hvitfeldt <emilhhvitfeldt@gmail.com> |
| License | MIT + file LICENSE |
| Version | 0.5.3 |
| URL | https://lime.data-imaginist.com, https://github.com/thomasp85/lime |
| Package repository | View on CRAN |
Installation

Install the latest version of this package by entering the following in R:
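```r
install.packages("lime")
```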