
# Local Model-Agnostic Methods {#local-methods}

Local interpretation methods explain individual predictions. In this chapter, you will learn about the following local explanation methods:

- Local surrogate models (LIME)
- Shapley values
- Counterfactual explanations

LIME and Shapley values are attribution methods, meaning that the prediction of a single instance is described as a sum of feature effects. Other methods, such as counterfactual explanations, are example-based: they explain a prediction with a (possibly synthetic) data instance instead of with feature attributions.
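
To make the attribution idea concrete, here is the additive form that both methods share, written in generic notation (a sketch for illustration; $\phi_0$ and $\phi_j$ are placeholder symbols for the baseline and the per-feature effects, not notation defined in this chapter):

$$\hat{f}(x) = \phi_0 + \sum_{j=1}^{p} \phi_j$$

For Shapley values, $\phi_0$ is the average prediction and the $\phi_j$ sum exactly to $\hat{f}(x) - \phi_0$; LIME obtains analogous effects from the weights of a local linear surrogate model.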


