Local interpretation methods explain individual predictions. In this chapter, you will learn about several local explanation methods.
LIME and Shapley values are attribution methods, meaning that the prediction of a single instance is described as the sum of feature effects. Other methods, such as counterfactual explanations, are example-based.
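To make the additive idea concrete, here is a minimal sketch (in Python rather than R, and not tied to any particular package) that computes exact Shapley values for a toy three-feature model by enumerating all coalitions, replacing "absent" features with a baseline value. The model `f`, the instance `x`, and the baseline are made up for illustration; the point is that the attributions sum to the difference between the prediction and the baseline prediction.

```python
import math
from itertools import combinations

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all coalitions.

    Features outside a coalition are set to their baseline value.
    Only feasible for a handful of features (cost grows as 2^n).
    """
    n = len(x)

    def value(subset):
        # Evaluate f with features in `subset` taken from x, rest from baseline.
        z = [x[j] if j in subset else baseline[j] for j in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Shapley kernel weight for a coalition of size |S|.
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy model: one linear effect plus an interaction between features 1 and 2.
f = lambda z: 2 * z[0] + 3 * z[1] * z[2]
x, baseline = [1, 2, 3], [0, 0, 0]

phi = shapley_values(f, x, baseline)
print(phi)                         # → [2.0, 9.0, 9.0]
print(sum(phi), f(x) - f(baseline))  # attributions sum to f(x) - f(baseline)
```

Note the efficiency property in action: the linear term is credited entirely to feature 0, while the interaction term is split equally between features 1 and 2, and the attributions sum exactly to `f(x) - f(baseline) = 20`.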