# R/GbmExplainR.R

#' GbmExplainR: Decompose predictions from a gbm into feature contributions + 
#' bias.
#'
#' This package works with the gbm package and allows a prediction from a
#' \code{gbm.object} to be decomposed into feature contributions plus a bias
#' term. This is a useful tool for explaining why the model gave a particular
#' observation the prediction it did.
#' 
#' Within a single tree, the contribution for a given node is calculated by
#' subtracting the prediction at the current node from the prediction at the
#' next node the observation visits in the tree. The predicted value at the
#' root node of each tree is folded into the bias term (which also includes
#' the model intercept, \code{initF}). Node contributions are summed by split
#' variable across all trees in the model, expressing the observation's
#' prediction as bias + one contribution per feature used in the model.
#' 
#' @section References:
#' The method used is based on the treeinterpreter Python package for random
#' forests: \url{https://github.com/andosa/treeinterpreter}.
#' There is a blog post on the approach here:
#' \url{http://blog.datadive.net/random-forest-interpretation-conditional-feature-contributions/}.
#' 
#' @docType package
#' @name GbmExplainR
NULL
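
# The per-tree decomposition described above can be sketched in a few lines.
# The following is an illustrative, self-contained example, not the package's
# actual API: explain_tree() is a hypothetical helper that walks a single
# observation down one tree, stored in the data frame layout returned by
# gbm::pretty.gbm.tree() (0-based SplitVar and node indices, a Prediction per
# node), and accumulates contributions as prediction differences between
# successive nodes. It handles continuous splits only; gbm encodes
# categorical splits differently in SplitCodePred.
explain_tree <- function(tree, x, var_names) {
  contribs <- setNames(numeric(length(var_names)), var_names)
  node <- 1                      # row 1 is the root (gbm indexes nodes from 0)
  bias <- tree$Prediction[node]  # root prediction folds into the bias term
  while (tree$SplitVar[node] != -1) {          # -1 marks a terminal node
    var <- var_names[tree$SplitVar[node] + 1]  # SplitVar is 0-based
    next_node <- if (is.na(x[[var]])) {
      tree$MissingNode[node] + 1
    } else if (x[[var]] < tree$SplitCodePred[node]) {
      tree$LeftNode[node] + 1
    } else {
      tree$RightNode[node] + 1
    }
    # contribution of this split = next node's prediction - current node's
    contribs[var] <- contribs[var] +
      tree$Prediction[next_node] - tree$Prediction[node]
    node <- next_node
  }
  list(bias = bias, contributions = contribs)
}

# Toy single-split tree in pretty.gbm.tree() layout: root splits on x1 at 0.5.
toy_tree <- data.frame(
  SplitVar      = c(0, -1, -1),
  SplitCodePred = c(0.5, -0.2, 0.3),
  LeftNode      = c(1, -1, -1),
  RightNode     = c(2, -1, -1),
  MissingNode   = c(1, -1, -1),
  Prediction    = c(0.05, -0.2, 0.3)
)
res <- explain_tree(toy_tree, list(x1 = 0.9), "x1")
# bias + contributions recovers the leaf prediction: 0.05 + 0.25 = 0.3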
# richardangell/GbmExplainR documentation built on May 22, 2019, 12:54 p.m.