mvtb: Fitting a Multivariate Tree Boosting Model


View source: R/mvtb.R

Description

Builds on gbm (Ridgeway, 2013; Friedman, 2001) to fit a univariate tree model for each outcome, selecting predictors at each iteration that explain (co)variance in the outcomes. The number of trees included in the model can be chosen by minimizing the multivariate mean squared error using cross-validation or a test set.

Usage

mvtb(Y, X, 
     n.trees = 100,
     shrinkage = 0.01, 
     interaction.depth = 1,
     distribution="gaussian",
     train.fraction = 1, 
     bag.fraction = 1, 
     cv.folds = 1, 
     keep.data = FALSE,
     s = NULL, 
     compress = FALSE, 
     save.cv = FALSE,
     iter.details = TRUE,
     verbose=FALSE,
     mc.cores = 1, ...)

Arguments

Y

vector, matrix, or data.frame for outcome variables with no missing values. To easily compare influences across outcomes and for numerical stability, outcome variables should be scaled to have unit variance.

X

vector, matrix, or data.frame of predictors. For best performance, continuous predictors should be scaled to have unit variance. Categorical variables should be converted to factors.

n.trees

maximum number of trees to be included in the model. Each individual tree is grown until a minimum number of observations in each node is reached.

shrinkage

a constant multiplier for the predictions from each tree to ensure a slow learning rate. Default is .01. Small shrinkage values may require a large number of trees to provide adequate fit.

interaction.depth

fixed depth of trees to be included in the model. A tree depth of 1 corresponds to fitting stumps (main effects only); higher tree depths capture higher-order interactions (e.g., 2 implies a model with up to 2-way interactions).

distribution

Character vector specifying the distribution of all outcomes. Default is "gaussian"; see ?gbm for further details.

train.fraction

proportion of the sample used for training the multivariate additive model. If both cv.folds and train.fraction are specified, the CV is carried out within the training set.

bag.fraction

proportion of the training sample used to fit univariate trees for each response at each iteration. Default: 1

cv.folds

number of cross-validation folds. Default: 1. Runs k + 1 models: the k fold models are run in parallel and the final model is run on the entire sample. If larger than 1, the number of trees that minimizes the multivariate MSE averaged over the k folds is reported in object$best.trees (see the sketch after this argument list).

keep.data

a logical variable indicating whether to keep the data stored with the object.

s

vector of indices denoting observations to be used for the training sample. If s is given, train.fraction is ignored.

compress

TRUE/FALSE. Compress output results list using bzip2 (approx 10% of original size). Default is FALSE.

save.cv

TRUE/FALSE. Save all k-fold cross-validation models. Default is FALSE.

iter.details

TRUE/FALSE. Return training, test, and cross-validation error at each iteration. Default is TRUE.

verbose

If TRUE, will print out progress and performance indicators for each model. Default is FALSE.

mc.cores

Number of cores for cross validation.

...

additional arguments passed to gbm. These include distribution, weights, var.monotone, n.minobsinnode, keep.data, verbose, class.stratify.cv. Note that other distribution arguments have not been tested.
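
A minimal sketch of how these arguments fit together (illustrative values only; Y and X are assumed to be an outcome matrix and predictor data.frame as in the Examples below, and n.minobsinnode is a gbm argument passed through ...):

## Scale outcomes and continuous predictors; categorical predictors should be factors
Ys <- scale(Y)
Xs <- data.frame(scale(X[, sapply(X, is.numeric)]))

## Up to 2-way interactions, tuned by 5-fold CV within a 75% training split
res <- mvtb(Y = Ys, X = Xs,
            n.trees = 1000, shrinkage = .01,
            interaction.depth = 2,
            train.fraction = .75, cv.folds = 5,
            n.minobsinnode = 10)

## Alternatively, pass training row indices directly; train.fraction is then ignored
s <- sample(1:nrow(Xs), size = floor(.75 * nrow(Xs)))
res <- mvtb(Y = Ys, X = Xs, s = s, cv.folds = 5)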

Details

This function selects predictors that explain covariance in multivariate outcomes. This is done efficiently by fitting separate gbm models for each outcome (contained in $models).

(Relative) influences can be retrieved using summary or mvtb.ri, which are the usual reductions in SSE due to splitting on each predictor. The covariance explained in pairs of outcomes by each predictor can be computed using mvtb.covex. Partial dependence plots can be obtained from mvtb.plot.

The model is tuned by selecting the number of trees that minimize the mean squared error in a test set for each outcome (by setting train.fraction) or averaged over k folds in k-fold cross-validation (by setting cv.folds > 1). The best number of trees is available via $best.trees. If both cv.folds and train.fraction are specified, cross-validation is carried out within the training set. If s is specified, train.fraction is ignored but cross-validation will be carried out for observations in s.
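
For example (a sketch with illustrative values; Ys and Xs as in the Examples below):

## Tune the number of trees with a 25% test set and 5-fold CV within the training set
res <- mvtb(Y = Ys, X = Xs, n.trees = 1000, train.fraction = .75, cv.folds = 5)
res$best.trees                      # number of trees minimizing the multivariate MSE
yhat <- predict(res, newdata = Xs)  # predictions from the fitted model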

Cross-validation models are usually discarded but can be saved by setting save.cv = TRUE; they can then be accessed from $ocv of the output object. Observations can be explicitly assigned to the training set by passing a vector of integers indexing the rows to include to s. Multivariate mean squared training, test, and cross-validation error are available from $train.error, $test.error, and $cverr when iter.details = TRUE.
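
A short sketch of accessing these elements (assuming the element names listed above):

res <- mvtb(Y = Ys, X = Xs, cv.folds = 5, save.cv = TRUE, iter.details = TRUE)
res$cverr                     # cross-validation error at each iteration
res$train.error               # training error at each iteration
res$test.error                # test error at each iteration
str(res$ocv, max.level = 1)   # the saved k-fold cross-validation models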

Since the output objects can be large, automatic compression is available by setting compress=TRUE. All methods that use the mvtb object automatically uncompress this object if necessary. The function mvtb.uncomp is available to manually decompress the object.
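
For example:

res.c <- mvtb(Y = Ys, X = Xs, compress = TRUE)  # store the fit compressed with bzip2
res <- mvtb.uncomp(res.c)                       # manually decompress when needed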

Note that trees are grown until a minimum number of observations in each node is reached. If the number of training samples times bag.fraction is less than the minimum number of observations per node (which can occur with small data sets), this will cause an error. Adjust n.minobsinnode, train.fraction, or bag.fraction accordingly.

Cross-validation can be parallelized by setting mc.cores > 1. Parallel cross-validation is carried out using parallel::mclapply, which makes mc.cores copies of the original environment. For models with many trees (> 100K), memory limits can be reached rapidly. mc.cores will not work on Windows.
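
For example (not available on Windows):

res <- mvtb(Y = Ys, X = Xs, cv.folds = 5, mc.cores = 2)  # run the 5 CV folds on 2 cores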

Value

Fitted model. This is a list containing, among other elements, the univariate gbm models fit to each outcome ($models), the selected number of trees ($best.trees), the multivariate training, test, and cross-validation error at each iteration ($train.error, $test.error, $cverr; returned when iter.details = TRUE), and the saved cross-validation models ($ocv; returned when save.cv = TRUE).

References

Miller, P. J., Lubke, G. H., McArtor, D. B., & Bergeman, C. S. (Accepted). Finding structure in data with multivariate tree boosting.

Ridgeway, G. (2013). gbm: Generalized Boosted Regression Models. R package.

Elith, J., Leathwick, J. R., & Hastie, T. (2008). A working guide to boosted regression trees. Journal of Animal Ecology, 77(4), 802-813.

Friedman, J. H. (2001). Greedy function approximation: a gradient boosting machine. Annals of statistics, 1189-1232.

See Also

summary.mvtb, predict.mvtb

mvtb.covex to estimate the covariance explained in pairs of outcomes by predictors

mvtb.nonlin to help detect nonlinear effects or interactions

plot.mvtb, mvtb.perspec for partial dependence plots

mvtb.uncomp to uncompress a compressed output object

Examples

## Load the wellbeing data; scale the outcomes and the continuous predictors
data(wellbeing)
Y <- wellbeing[, 21:26]
X <- wellbeing[, 1:20]
Ys <- scale(Y)
cont.id <- unlist(lapply(X, is.numeric))
Xs <- scale(X[, cont.id])

## Fit the model
res <- mvtb(Y = Ys, X = Xs)

## Interpret the model
summary(res)                              # relative influences for each outcome
covex <- mvtb.covex(res, Y = Ys, X = Xs)  # covariance explained in pairs of outcomes
plot(res, predictor.no = 8)               # partial dependence plot for predictor 8
predict(res, newdata = Xs)                # fitted values for the (scaled) predictors
mvtb.cluster(covex)                       # cluster the covariance explained matrix
mvtb.heat(t(mvtb.ri(res)), cexRow = .8, cexCol = 1, dec = 0)  # heat map of relative influences
