shapr-package | R Documentation
Complex machine learning models are often hard to interpret. However, in many situations it is crucial to understand and explain why a model made a specific prediction. Shapley values constitute the only prediction explanation framework with a solid theoretical foundation. Previously known methods for estimating Shapley values do, however, assume feature independence. This package implements the method described in Aas, Jullum and Løland (2019), arXiv:1903.10464, which accounts for feature dependence and thereby produces more accurate estimates of the true Shapley values. An accompanying Python wrapper (shaprpy) is available on GitHub.
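As a minimal sketch of how the package is typically used, the snippet below fits an xgboost model and explains a few predictions with a dependence-aware approach. It follows the pattern from the package's README for earlier shapr versions; function signatures and argument names (e.g. `shapr()`, `explain()`, `prediction_zero`) may differ in newer releases, so treat this as an illustration rather than a definitive API reference.

```r
# Sketch of a typical shapr workflow (argument names may vary by version)
library(shapr)
library(xgboost)

data("Boston", package = "MASS")
x_var <- c("lstat", "rm", "dis", "indus")
y_var <- "medv"

x_train <- as.matrix(Boston[-(1:6), x_var])
y_train <- Boston[-(1:6), y_var]
x_test  <- as.matrix(Boston[1:6, x_var])

# Fit any supported predictive model; xgboost is one example
model <- xgboost(data = x_train, label = y_train,
                 nround = 20, verbose = FALSE)

# Prepare the explainer and set the reference prediction
# (the mean of the training response is a common choice)
explainer <- shapr(x_train, model)
p0 <- mean(y_train)

# "empirical" estimates Shapley values while accounting for
# feature dependence, as described in Aas et al. (2019)
explanation <- explain(x_test, approach = "empirical",
                       explainer = explainer, prediction_zero = p0)

# One row of Shapley values per explained observation
print(explanation$dt)
```

The key design point is the `approach` argument: instead of assuming independent features, approaches such as `"empirical"` or `"gaussian"` model the conditional distribution of the features when evaluating coalitions.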
Maintainer: Martin Jullum Martin.Jullum@nr.no (ORCID)
Authors:
Nikolai Sellereite nikolaisellereite@gmail.com (ORCID)
Lars Henry Berge Olsen lholsen@math.uio.no (ORCID)
Annabelle Redelmeier Annabelle.Redelmeier@nr.no
Jon Lachmann Jon@lachmann.nu
Other contributors:
Anders Løland Anders.Loland@nr.no [contributor]
Jens Christian Wahl Jens.Christian.Wahl@nr.no [contributor]
Camilla Lingjærde [contributor]
Norsk Regnesentral [copyright holder, funder]