multifor: Construct a random forest prediction rule and calculate class-focused and discriminatory variable importance scores

View source: R/multifor.R


Construct a random forest prediction rule and calculate class-focused and discriminatory variable importance scores.

Description

Constructs a random forest for multi-class outcomes and calculates the class-focused variable importance measure (VIM) and the discriminatory VIM.
The class-focused VIM ranks the covariates with respect to their ability to distinguish individual outcome classes from all others, which can be important in multi-class prediction tasks (see "Details" below). The discriminatory VIM, in contrast, similarly to conventional VIMs, measures the overall influence of covariates on classification performance, regardless of their relevance to individual classes.

Usage

multifor(
  formula = NULL,
  data = NULL,
  num.trees = ifelse(nrow(data) <= 5000, 5000, 1000),
  importance = "both",
  write.forest = TRUE,
  probability = TRUE,
  min.node.size = NULL,
  max.depth = NULL,
  replace = FALSE,
  sample.fraction = ifelse(replace, 1, 0.7),
  case.weights = NULL,
  keep.inbag = FALSE,
  inbag = NULL,
  holdout = FALSE,
  oob.error = TRUE,
  num.threads = NULL,
  verbose = TRUE,
  seed = NULL,
  dependent.variable.name = NULL,
  mtry = NULL,
  npervar = 5
)

Arguments

formula

Object of class formula or character describing the model to fit. Interaction terms supported only for numerical variables.

data

Training data of class data.frame, matrix, or dgCMatrix (package Matrix).

num.trees

Number of trees. The default is 5000 for datasets with at most 5000 observations and 1000 for datasets with more than 5000 observations.

importance

Variable importance mode, one of the following: "both" (the default), "class-focused", "discriminatory", "none". If "class-focused", class-focused VIM values are computed; if "discriminatory", discriminatory VIM values are computed; if "both", both types of VIM values are computed. See the 'Details' section below; a short usage sketch also follows this argument list.

write.forest

Save multifor.forest object, required for prediction. Set to FALSE to reduce memory usage if no prediction intended.

probability

Grow a probability forest as in Malley et al. (2012). Using this option (default is TRUE), class probability predictions are obtained.

min.node.size

Minimal node size. Default 5 for probability and 1 for classification.

max.depth

Maximal tree depth. A value of NULL or 0 (the default) corresponds to unlimited depth, 1 to tree stumps (1 split per tree).

replace

Sample with replacement. Default is FALSE.

sample.fraction

Fraction of observations to sample. Default is 1 for sampling with replacement and 0.7 for sampling without replacement. This can be a vector of class-specific values.

case.weights

Weights for sampling of training observations. Observations with larger weights will be selected with higher probability in the bootstrap (or subsampled) samples for the trees.

keep.inbag

Save how often observations are in-bag in each tree.

inbag

Manually set observations per tree. List of size num.trees, containing inbag counts for each observation. Can be used for stratified sampling.

holdout

Hold-out mode: hold out all samples with case weight 0 and use these for variable importance and prediction error.

oob.error

Compute OOB prediction error. Default is TRUE.

num.threads

Number of threads. Default is number of CPUs available.

verbose

Show computation status and estimated runtime.

seed

Random seed. Default is NULL, which generates the seed from R. Set to 0 to ignore the R seed.

dependent.variable.name

Name of outcome variable, needed if no formula given.

mtry

Number of candidate variables to sample for each split. Default is the (rounded down) square root of the number of variables.

npervar

Number of splits to sample per candidate variable. Default is 5.
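
For illustration, a minimal usage sketch combining some of these arguments (the 'ctg' data set shipped with the package is used, as in the 'Examples' section below; num.trees is kept small here purely for speed):

library("diversityForest")
data(ctg)
model <- multifor(dependent.variable.name = "CLASS", data = ctg,
                  importance = "class-focused", num.trees = 20,
                  replace = TRUE, sample.fraction = 1)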

Details

Covariates targeted by the class-focused VIM, that is, covariates that specifically help distinguish individual outcome classes from the others, are hereafter referred to as "class-related covariates". The primary motivation for identifying class-related covariates is frequently the interpretation of covariate effects.
Potential example applications include cancer subtyping (identifying biomarkers predictive of specific subtypes, e.g., luminal A, HER2-positive, rather than broad groups, e.g., hormone-driven vs. non-hormone-driven cancers), voting studies (covariates specifically associated with support for individual parties rather than general ideological orientation), and forensic science (detecting covariates specific to crime types like burglary or cybercrime rather than broadly violent vs. non-violent offenses).
In contrast to the class-focused VIM, conventional VIMs, such as the permutation VIM or the Gini importance, as well as the discriminatory VIM, measure the overall influence of variables regardless of their class-relatedness. Therefore, these measures rank not only class-related variables high, but also variables that merely discriminate well between groups of classes. This is problematic if only class-related variables are to be identified.
NOTE: To learn how the variables with the largest class-focused VIM values influence the multi-class outcome, it is crucial to apply the plot.multifor function to the resulting multifor object. Two further related plot functions are plotMcl and plotVar.
NOTE ALSO: This methodology is based on work currently under peer review. A reference will be added once the corresponding paper is published.

The class-focused VIM requires all covariates to be ordered. For this reason, before the random forest is constructed, the categories of unordered categorical covariates are ordered using the approach of Coppersmith et al. (1999), which ensures that categories with similar outcome class distributions are placed close to each other. The same approach is used in the ranger R package when the option respect.unordered.factors = "order" is chosen.
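
As a rough illustration of this ordering idea, consider the following minimal sketch (it is not the package's internal implementation; one common variant orders the categories along the first principal component of the per-category outcome class distributions):

## Toy data: an unordered factor 'x' and a multi-class outcome 'y'
x <- factor(c("a", "b", "c", "a", "b", "c", "a", "b"))
y <- factor(c("A", "B", "C", "A", "C", "C", "B", "B"))

## Outcome class distribution within each category of 'x'
props <- prop.table(table(x, y), margin = 1)

## Order the categories along the first principal component of these
## distributions, so that categories with similar class distributions
## receive neighboring ranks
pc1 <- prcomp(props)$x[, 1]
x_ordered <- factor(x, levels = names(sort(pc1)), ordered = TRUE)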

Value

Object of class multifor with elements

predictions

Predicted classes (for probability=FALSE) or class probabilities (for probability=TRUE), based on out-of-bag samples.

num.trees

Number of trees.

num.independent.variables

Number of independent variables.

min.node.size

Value of minimal node size used.

mtry

Number of candidate variables sampled for each split.

class_foc_vim

Class-focused VIM values. These are only computed for independent variables that feature at least as many unique values as the outcome variable has classes. For all other variables, the corresponding entries in the vector class_foc_vim are NA.

discr_vim

Discriminatory VIM values for all independent variables.

prediction.error

Overall out-of-bag prediction error. For classification, this is the fraction of misclassified samples; for probability estimation, it is the Brier score.

confusion.matrix

Contingency table for classes and predictions based on out-of-bag samples (classification only).

forest

Saved forest (if write.forest is set to TRUE). Note that the variable IDs in the split.varIDs object do not necessarily represent the column numbers in R.

treetype

Type of forest/tree. Classification or probability.

call

Function call.

importance.mode

Importance mode used.

num.samples

Number of samples.

replace

Sample with replacement.

plotres

List of objects needed by the plot functions: data contains the data set and yvarname is the name of the outcome variable.
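
After fitting, a quick way to get an overview of these elements is, for example (a small sketch; 'model' denotes a fitted multifor object):

str(model, max.level = 1)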

Author(s)

Roman Hornung, Marvin N. Wright

References

  • Hornung, R. (2022). Diversity forests: Using split sampling to enable innovative complex split procedures in random forests. SN Computer Science 3(2):1, doi:10.1007/s42979-021-00920-1.

  • Wright, M. N., Ziegler, A. (2017). ranger: A fast implementation of random forests for high dimensional data in C++ and R. Journal of Statistical Software 77:1-17, doi:10.18637/jss.v077.i01.

  • Breiman, L. (2001). Random forests. Machine Learning 45:5-32, doi:10.1023/A:1010933404324.

  • Malley, J. D., Kruppa, J., Dasgupta, A., Malley, K. G., Ziegler, A. (2012). Probability machines: consistent probability estimation using nonparametric learning machines. Methods of Information in Medicine 51:74-81, doi:10.3414/ME00-01-0052.

  • Coppersmith, D., Hong, S. J., Hosking, J. R. (1999). Partitioning nominal attributes in decision trees. Data Mining and Knowledge Discovery 3:197-217, doi:10.1023/A:1009869804967.

See Also

predict.multifor

Examples

## Not run: 

## Load package:

library("diversityForest")



## Set seed to make results reproducible:

set.seed(1234)



## Load the "ctg" data set:

data(ctg)



## Construct a random forest:

model <- multifor(dependent.variable.name = "CLASS", data = ctg, 
                  num.trees = 20)

# NOTE: num.trees = 20 is much too small for practical purposes. This small
# number of trees is used here only to keep the runtime of the example short.
# The default is num.trees = 5000 for datasets with at most 5000 observations
# and num.trees = 1000 for larger datasets.



## The out-of-bag estimated Brier score (note that by default
## 'probability = TRUE' is used in 'multifor'):

model$prediction.error



## Inspect the class-focused and the discriminatory VIM values:

model$class_foc_vim

# --> Note that there are no class-focused VIM values for some of the variables.
# These are the variables that feature fewer unique values than the outcome
# has classes. See the "Details" section above.
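
# These variables can be listed directly (a small sketch, relying on the VIM
# vector being named after the covariates):

names(model$class_foc_vim)[is.na(model$class_foc_vim)]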

model$discr_vim


## Inspect the 5 variables with the largest class-focused VIM values and the
## 5 variables with the largest discriminatory VIM values:

sort(model$class_foc_vim, decreasing = TRUE)[1:5]

sort(model$discr_vim, decreasing = TRUE)[1:5]
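
# The two rankings can also be compared visually with a quick base-R sketch
# (the package's dedicated plot functions remain the preferred tool for
# interpreting the results):

oldpar <- par(mfrow = c(1, 2))
barplot(sort(model$class_foc_vim, decreasing = TRUE)[1:5], las = 2,
        main = "Class-focused VIM")
barplot(sort(model$discr_vim, decreasing = TRUE)[1:5], las = 2,
        main = "Discriminatory VIM")
par(oldpar)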



## Instead of passing the name of the outcome variable through the 
## 'dependent.variable.name' argument as above, the formula interface can also 
## be used. Below, we fit a random forest with only the first five variables 
## from the 'ctg' data set:

model <- multifor(CLASS ~ b + e + LBE + LB + AC, data = ctg, num.trees = 20)


## As expected, the out-of-bag estimated prediction error is much larger
## for this model:

model$prediction.error



## NOTE: Visual exploration of the results of the class-focused VIM analysis
## is crucial.
## Therefore, in practice the next step would be to apply the
## 'plot.multifor' function to the object 'model'.

# plot(model)





## Prediction:


# Separate 'ctg' data set randomly in training
# and test data:

data(ctg)
train.idx <- sample(nrow(ctg), 2/3 * nrow(ctg))
ctg.train <- ctg[train.idx, ]
ctg.test <- ctg[-train.idx, ]

# Construct random forest on training data:
# NOTE again: num.trees = 20 is much too small for practical purposes.
model_train <- multifor(dependent.variable.name = "CLASS", data = ctg.train, 
                        importance = "none", probability = FALSE, 
                        num.trees = 20)
# NOTE: Because we are only interested in prediction here, we do not
# calculate VIM values (importance = "none"), which speeds up the
# computation.
# NOTE also: Because we are interested in class label prediction here rather
# than class probability prediction, we specify 'probability = FALSE' above.

# Predict class values of the test data:
pred.ctg <- predict(model_train, data = ctg.test)

# Compare predicted and true class values of the test data:
table(ctg.test$CLASS, pred.ctg$predictions)
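
# The overall test-set misclassification rate (a small sketch; the factor
# levels of the predictions are those of the training outcome, which match
# the levels of the test outcome here):

mean(pred.ctg$predictions != ctg.test$CLASS)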



## Repeat the analysis for class probability prediction
## (default 'probability = TRUE'):

model_train <- multifor(dependent.variable.name = "CLASS", data = ctg.train, 
                        importance = "none", num.trees = 20)

# Predict class probabilities in the test data:
pred.ctg <- predict(model_train, data = ctg.test)

# The predictions are now a matrix of class probabilities:
head(pred.ctg$predictions)

# Obtain class predictions by choosing the classes with the maximum predicted
# probabilities (the function 'which.is.max' chooses one class randomly if
# there are several classes with maximum probability):
library("nnet")
classes <- levels(ctg.train$CLASS)
pred_classes <- factor(classes[apply(pred.ctg$predictions, 1, which.is.max)], 
                       levels=classes)

# Compare predicted and true class values of the test data:
table(ctg.test$CLASS, pred_classes)
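
# As a final sketch, the test-set Brier score (using one common multi-class
# definition) can be computed from the predicted probabilities, assuming the
# columns of the prediction matrix are named after the outcome classes:

probs <- pred.ctg$predictions
ymat <- 1 * outer(as.character(ctg.test$CLASS), colnames(probs), "==")
mean(rowSums((probs - ymat)^2))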


## End(Not run)

