iml: Interpretable Machine Learning

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: permutation feature importance described by Fisher et al. (2018) <doi:10.48550/arxiv.1801.01489>, accumulated local effects plots described by Apley (2018) <doi:10.48550/arxiv.1612.08468>, partial dependence plots described by Friedman (2001) <https://www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013) <doi:10.1080/10618600.2014.907095>, local models (a variant of 'lime') described by Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>, the Shapley value described by Strumbelj et al. (2014) <doi:10.1007/s10115-013-0679-x>, feature interactions described by Friedman and Popescu (2008) <doi:10.1214/07-AOAS148>, and tree surrogate models.
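As a brief illustration, the following sketch shows the typical workflow: wrap a fitted model in a Predictor object and then apply the interpretation methods listed above. It assumes the 'randomForest' package and the Boston housing data from 'MASS' are available; any other model and data set could be substituted, since iml is model-agnostic.

library("iml")
library("randomForest")

# Fit some model; a random forest is used here only as an example
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap model and data in a Predictor object, the common interface for all iml methods
X <- Boston[, which(names(Boston) != "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018)
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Partial dependence and ICE curves for a single feature
eff <- FeatureEffect$new(predictor, feature = "lstat", method = "pdp+ice")
plot(eff)

# Shapley values for an individual prediction (Strumbelj et al. 2014)
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)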

Package details

Author: Giuseppe Casalicchio [aut, cre], Christoph Molnar [aut], Patrick Schratz [aut] (<https://orcid.org/0000-0003-0748-6624>)
Maintainer: Giuseppe Casalicchio <giuseppe.casalicchio@lmu.de>
License: MIT + file LICENSE
Version: 0.11.3
URL: https://giuseppec.github.io/iml/ https://github.com/giuseppec/iml/
Package repository: CRAN
Installation

Install the latest version of this package by entering the following in R:
install.packages("iml")
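The development version can presumably be installed from the GitHub repository listed above, for example with the 'remotes' package:

# Development version from GitHub (assumes the 'remotes' package is installed)
remotes::install_github("giuseppec/iml")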
