Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018) <arXiv:1801.01489>, accumulated local effects plots described by Apley (2018) <arXiv:1612.08468>, partial dependence plots described by Friedman (2001) <www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013) <doi:10.1080/10618600.2014.907095>, local models (a variant of 'lime') described by Ribeiro et al. (2016) <arXiv:1602.04938>, the Shapley value described by Strumbelj et al. (2014) <doi:10.1007/s10115-013-0679-x>, feature interactions described by Friedman et al. (2008) <doi:10.1214/07-AOAS148>, and tree surrogate models.
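As a brief sketch of how these methods are typically invoked (a minimal example, assuming the `randomForest` package is available; it uses the package's documented `Predictor`, `FeatureImp`, and `FeatureEffect` classes):

```r
library(iml)
library(randomForest)

# Fit any model; iml wraps it behind a model-agnostic Predictor object.
rf <- randomForest(Species ~ ., data = iris, ntree = 50)
predictor <- Predictor$new(rf, data = iris[, -5], y = iris$Species)

# Permutation feature importance (Fisher et al., 2018),
# using cross-entropy loss for this classification task.
imp <- FeatureImp$new(predictor, loss = "ce")
plot(imp)

# Accumulated local effects plot for one feature (Apley, 2018).
ale <- FeatureEffect$new(predictor, feature = "Petal.Length", method = "ale")
plot(ale)
```

The same `Predictor` object can be passed to the other classes (e.g. `Shapley`, `LocalModel`, `Interaction`, `TreeSurrogate`) to apply the remaining methods listed above.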
Package details 


Author  Christoph Molnar [aut, cre], Patrick Schratz [aut] (<https://orcid.org/0000-0003-0748-6624>) 
Maintainer  Christoph Molnar <christoph.molnar@gmail.com> 
License  MIT + file LICENSE 
Version  0.10.1 
URL  https://christophm.github.io/iml/ https://github.com/christophM/iml/ 
Package repository  View on CRAN 
Installation 
Install the latest version of this package by entering the following in R:
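The installation code itself is not shown on this page; the standard command for a package on CRAN is:

```r
install.packages("iml")
```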
