iml: Interpretable Machine Learning
Version 0.7.0

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: permutation feature importance described by Fisher et al. (2018), accumulated local effects (ALE) plots described by Apley (2018), partial dependence plots described by Friedman (2001), individual conditional expectation ('ice') plots described by Goldstein et al. (2013), local surrogate models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley value described by Strumbelj et al. (2014), feature interactions described by Friedman et al., and tree surrogate models.
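A minimal sketch of how these methods are typically used together, based on the package's model-agnostic design: wrap a fitted model in a `Predictor` object, then pass it to a method class. The random forest model and the `Boston` data set are illustrative choices, not requirements; any model with a predict function should work.

```r
library(iml)
library(randomForest)

# Fit any model -- a random forest is used here purely as an example
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap the model and data in a Predictor object (the common interface
# that all iml method classes consume)
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018)
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Partial dependence plus ICE curves for a single feature
eff <- FeatureEffect$new(predictor, feature = "lstat", method = "pdp+ice")
plot(eff)
```

The same `predictor` object can be reused across the other method classes, so the cost of wrapping the model is paid once.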

Package details

Author: Christoph Molnar [aut, cre]
Date of publication: 2018-09-11 15:20:02 UTC
Maintainer: Christoph Molnar <[email protected]>
License: MIT + file LICENSE
Package repository: CRAN
Installation

Install the latest version of this package by entering the following in R:
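The install command itself is missing from the page; for a package on CRAN it is the standard one:

```r
install.packages("iml")
```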

