interpret: Fit Interpretable Machine Learning Models and Explain Blackbox Machine Learning

Package for training interpretable machine learning models and explaining blackbox systems. Historically, the most interpretable machine learning models were not very accurate, and the most accurate models were not very interpretable. Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM) that offers both high accuracy and interpretability. EBM uses machine learning techniques such as bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models). This makes them as accurate as random forests and gradient-boosted trees while also enhancing their intelligibility and editability. Details on the EBM algorithm can be found in the paper by Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad (2015, <doi:10.1145/2783258.2788613>).

Getting started

Package details

Author: Samuel Jenkins [aut], Harsha Nori [aut], Paul Koch [aut], Rich Caruana [aut, cre], Microsoft Corporation [cph]
Maintainer: Rich Caruana <[email protected]>
License: MIT + file LICENSE
Version: 0.1.24
URL: https://github.com/interpretml/interpret
Package repository: CRAN
Installation: Install the latest version of this package by entering the following in R:

install.packages("interpret")
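Once installed, a minimal sketch of training an EBM classifier might look like the following. This is an illustrative example only: the dataset choice is arbitrary, and the function names (`ebm_classify`, `ebm_predict_proba`, `ebm_show`) are assumptions based on the package's interpretml lineage; consult the package's reference manual for the exact API.

```r
library(interpret)

# Toy binary classification problem built from a standard R dataset
# (hypothetical example; any data frame of numeric features works)
X <- mtcars[, c("mpg", "hp", "wt")]
y <- mtcars$am  # 0/1 transmission type as the target

# Fit an Explainable Boosting Machine classifier
# (assumed entry point; verify the name against the package docs)
model <- ebm_classify(X, y)

# Predicted class probabilities for the training rows
probs <- ebm_predict_proba(model, X)

# Inspect a per-feature shape function interactively in the browser
# ebm_show(model, "mpg")
```

Because an EBM is a GAM at heart, each feature's contribution is an additive term, which is what makes the per-feature plots above directly interpretable and even editable after training.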


interpret documentation built on Dec. 12, 2019, 5:20 p.m.