GbmExplainR: Decompose predictions from a gbm into feature contributions + bias


Description

This package works with the gbm package and allows a prediction from a gbm.object to be decomposed into feature contributions + bias. This is a useful tool for explaining why a particular observation received the prediction it did from the model.

Details

Within a single tree, the contribution for a given node is calculated by subtracting the prediction for the current node from the prediction of the next node the observation visits in the tree. The predicted value of the first (root) node in each tree is absorbed into the bias term, which also includes the model intercept (initF). Node contributions are then summed by the node's split variable, across all trees in the model, giving the observation's prediction expressed as bias + a contribution for each feature used in the model.
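The decomposition described above can be sketched on a toy tree. This is an illustrative Python example, not the gbm.object structure: the node layout (nested dicts with value, split_var, threshold, left, right fields) and the init_f argument are assumptions made for the sketch. Each step down the tree contributes (next node's prediction - current node's prediction) to the split variable of the current node, so the per-feature contributions telescope and bias + sum(contributions) recovers the leaf prediction.

```python
def decompose(tree, x, init_f=0.0):
    """Decompose a single tree's prediction for observation x into
    bias + per-feature contributions (toy node format, not gbm's)."""
    contributions = {}
    node = tree
    # The root node's predicted value is folded into the bias,
    # together with the model intercept (initF in gbm).
    bias = init_f + node["value"]
    while "split_var" in node:  # walk until a leaf (no split) is reached
        var = node["split_var"]
        nxt = node["left"] if x[var] <= node["threshold"] else node["right"]
        # Contribution of this split: change in prediction moving to the next node.
        contributions[var] = contributions.get(var, 0.0) + (nxt["value"] - node["value"])
        node = nxt
    return bias, contributions


# Toy tree: root predicts 10; splitting on x1 then x2 refines the prediction.
tree = {
    "value": 10.0, "split_var": "x1", "threshold": 5.0,
    "left": {
        "value": 8.0, "split_var": "x2", "threshold": 2.0,
        "left": {"value": 7.0},
        "right": {"value": 9.0},
    },
    "right": {"value": 14.0},
}

bias, contribs = decompose(tree, {"x1": 3.0, "x2": 1.0})
print(bias, contribs)               # 10.0 {'x1': -2.0, 'x2': -1.0}
print(bias + sum(contribs.values()))  # 7.0, the leaf prediction
```

For a full gbm model, the same routine would be run over every tree (scaled by the shrinkage, as gbm's predictions are), accumulating the contributions per feature into a single bias + contributions breakdown.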

References

The method used is based on the Python package treeinterpreter, for random forests: https://github.com/andosa/treeinterpreter. There is a blog post on that package here: http://blog.datadive.net/random-forest-interpretation-conditional-feature-contributions/.


richardangell/GbmExplainR documentation built on May 22, 2019, 12:54 p.m.