ipmrf: IPM casewise with CART-RF by 'randomForest' for OOB samples

View source: R/ipmrf.R

ipmrf {IPMRF}    R Documentation

IPM casewise with CART-RF by randomForest for OOB samples

Description

The IPM for a case in the training set is calculated by considering only the trees where the case belongs to the OOB set. The case is put down each of those trees; in each tree it goes from the root node to a leaf through a series of nodes, and the splitting variable at each of these nodes is recorded. For each tree, the percentage of times each variable is selected along the case's path from the root to the terminal node is calculated. Note that this is not the percentage of times a split occurred on variable $k$ in tree $t$ overall, but counts only the variables that intervened in the prediction of the case. The IPM for the case is obtained by averaging those percentages over the trees where the case belongs to the OOB set.
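
In symbols (a sketch of one possible formalization consistent with the description above; the notation $T_i$ and $n_{ikt}$ is introduced here for illustration and is not from the package):

$$\mathrm{IPM}_{ik} = \frac{1}{|T_i|} \sum_{t \in T_i} \frac{n_{ikt}}{\sum_{k'} n_{ik't}},$$

where $T_i$ is the set of trees in which case $i$ is OOB, and $n_{ikt}$ is the number of nodes split on variable $k$ along the path that case $i$ follows from the root to its terminal node in tree $t$.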

Usage

ipmrf(marbolr, da, ntree)

Arguments

marbolr

Random forest obtained with randomForest. Responses can be of any type supported by randomForest, i.e. numerical or nominal. Ordered responses, censored response variables and multivariate responses are not handled here; they can be considered with ipmparty instead.

da

Data frame containing only the predictors (not the response) of the training set used to fit marbolr. Each row corresponds to an observation and each column to a predictor. Predictors can be numeric, nominal or ordered factors.

ntree

Number of trees in the random forest.
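
A minimal call pattern (a sketch, not from the package documentation; rf is assumed to be an existing randomForest fit and X the data frame of training predictors used to fit it):

    ## the number of trees can be read from the fit itself
    ipm <- ipmrf(rf, X, rf$ntree)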

Details

The random forest is based on CART.

All details are given in Epifanio (2017).

Value

A matrix with the casewise IPM for the cases in the training set, estimated from the trees where each case is an OOB observation. It has as many rows as there are cases in da and as many columns as there are predictors in da.
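
For instance, reusing the objects ri, da and ntree from the Examples below (a sketch of a quick shape check):

    pupf <- ipmrf(ri, da, ntree)
    ## one row per training case, one column per predictor
    dim(pupf)      # nrow(da) rows, ncol(da) columns
    ## the per-tree path percentages sum to 1 over the predictors,
    ## so each averaged row should also sum to approximately 1
    rowSums(pupf)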

Note

See Epifanio (2017) for the RF parameters to use, for the advantages and limitations of IPM, and in particular for the caveats when CART is considered with predictors of different types.

Author(s)

Irene Epifanio

References

Pierola, A., Epifanio, I. and Alemany, S. (2016) An ensemble of ordered logistic regression and random forest for child garment size matching. Computers & Industrial Engineering, 101, 455–465.

Epifanio, I. (2017) Intervention in prediction measure: a new approach to assessing variable importance for random forests. BMC Bioinformatics, 18, 230.

See Also

ipmparty, ipmranger, ipmpartynew, ipmrfnew, ipmrangernew, ipmgbmnew

Examples


## Not run: 
    #Note: more examples can be found at
    #https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-017-1650-8
    
    if (require("mlbench")) {
        #data used by Breiman, L.: Random forests. Machine Learning 45(1), 5--32 (2001)
        data(PimaIndiansDiabetes2)
        Diabetes <- na.omit(PimaIndiansDiabetes2)
        
        set.seed(2016)
        require(randomForest)
        ri <-
            randomForest(
                diabetes ~ .,
                data = Diabetes,
                ntree = 500,
                importance = TRUE,
                keep.inbag = TRUE,
                replace = FALSE
            )
        
        #GVIM and PVIM (CART-RF)
        im <- importance(ri)
        im
        #rank of each variable according to each importance measure
        ii <- apply(im, 2, rank)
        ii
        
        #IPM based on CART-RF (randomForest package)
        da <- Diabetes[, 1:8]
        ntree <- 500
        #IPM case-wise computed with OOB
        pupf <- ipmrf(ri, da, ntree)
        
        #global IPM
        pua <- colMeans(pupf)
        pua
        
        #IPM by classes
        puac <- matrix(0, nrow = 2, ncol = ncol(da))
        puac[1, ] <- colMeans(pupf[Diabetes$diabetes == 'neg', ])
        puac[2, ] <- colMeans(pupf[Diabetes$diabetes == 'pos', ])
        colnames(puac) <- colnames(da)
        rownames(puac) <- c('neg', 'pos')
        puac
        
        #rank IPM
        #global rank
        rank(pua)
        #rank by class
        apply(puac, 1, rank)
    }

## End(Not run)
