This package works with the gbm package, decomposing a prediction from a
gbm.object
into feature contributions plus a bias term.
This is a useful tool for explaining why a particular observation
received the prediction it did from the model.
Within a single tree, the contribution for a given
node is calculated by subtracting the prediction for the current node from
the prediction of the next node the observation would visit in the tree.
The predicted value for the root node of each tree is folded into the
bias term (which also includes the model's intercept, initF).
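The per-tree rule above can be sketched on a toy regression tree. Everything here is hypothetical (the dict structure, feature names, and values are illustrative, not this package's or gbm's API): each node stores the mean target of the training rows that reach it, and each split's feature is credited with the change in node prediction along the observation's path.

```python
# Hypothetical toy tree: "value" is the node's prediction (mean target of
# training rows reaching it); internal nodes also carry a split.
tree = {
    "value": 10.0, "feature": "x1", "threshold": 5.0,
    "left": {
        "value": 6.0, "feature": "x2", "threshold": 1.0,
        "left":  {"value": 4.0},
        "right": {"value": 8.0},
    },
    "right": {"value": 14.0},
}

def decompose(tree, obs):
    """Walk obs down the tree, crediting each split's feature with the
    change in prediction (next node's value minus current node's value)."""
    bias = tree["value"]              # root prediction enters the bias term
    contributions = {}
    node = tree
    while "feature" in node:          # internal node: follow the split
        nxt = node["left"] if obs[node["feature"]] <= node["threshold"] else node["right"]
        f = node["feature"]
        contributions[f] = contributions.get(f, 0.0) + nxt["value"] - node["value"]
        node = nxt
    return bias, contributions, node["value"]

bias, contribs, pred = decompose(tree, {"x1": 3.0, "x2": 0.5})
# By construction, bias + sum of contributions telescopes to the leaf value.
assert abs(bias + sum(contribs.values()) - pred) < 1e-12
```

Here the observation goes left twice: x1 is credited with 6 - 10 = -4 and x2 with 4 - 6 = -2, so the bias of 10 plus the contributions of -6 reproduces the leaf prediction of 4 exactly.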
Node contributions are summed by the split variable for the node, across all
trees in the model, giving the observation's prediction represented as
bias + contribution for each feature used in the model.
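Aggregation across an ensemble can be sketched the same way. This is a hedged illustration, not the package's implementation: two toy trees stand in for a boosted model, init_f plays the role of gbm's initF intercept, and shrinkage is ignored for simplicity.

```python
def decompose(tree, obs):
    # Per-tree walk: credit each split's feature with the change in node value.
    bias, contribs, node = tree["value"], {}, tree
    while "feature" in node:
        nxt = node["left"] if obs[node["feature"]] <= node["threshold"] else node["right"]
        contribs[node["feature"]] = (contribs.get(node["feature"], 0.0)
                                     + nxt["value"] - node["value"])
        node = nxt
    return bias, contribs, node["value"]

# Two hypothetical trees standing in for a boosted ensemble.
tree1 = {"value": 2.0, "feature": "x1", "threshold": 0.0,
         "left": {"value": 1.0}, "right": {"value": 3.0}}
tree2 = {"value": -1.0, "feature": "x2", "threshold": 0.5,
         "left": {"value": -2.0}, "right": {"value": 0.0}}
init_f = 5.0                          # stands in for the model intercept (initF)

obs = {"x1": 1.0, "x2": 0.2}
total_bias, total_contribs, prediction = init_f, {}, init_f
for t in (tree1, tree2):
    bias, contribs, leaf = decompose(t, obs)
    total_bias += bias                # root values fold into the bias term
    for f, c in contribs.items():     # sum contributions by split variable
        total_contribs[f] = total_contribs.get(f, 0.0) + c
    prediction += leaf                # boosted prediction sums tree outputs

# The ensemble prediction is recovered as bias + per-feature contributions.
assert abs(total_bias + sum(total_contribs.values()) - prediction) < 1e-12
```

In this toy run x1 contributes 3 - 2 = 1 from the first tree and x2 contributes -2 - (-1) = -1 from the second, so the bias of 5 + 2 - 1 = 6 plus contributions summing to 0 recovers the prediction of 6.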
The method is based on the treeinterpreter Python package for random forests: https://github.com/andosa/treeinterpreter. A blog post describing the approach is available at http://blog.datadive.net/random-forest-interpretation-conditional-feature-contributions/.