fs.stability: Classification & Feature Selection


View source: R/fs.stability.v2.R

Description

Applies models to high-dimensional data to both classify and determine important features for classification. The function bootstraps a user-specified number of times to facilitate stability metrics of features selected thereby providing an important metric for biomarker investigations, namely whether the important variables can be identified if the models are refit on 'different' data.
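The stability idea can be illustrated with the default "jaccard" metric. The sketch below (illustration only; the feature names are hypothetical) computes the Jaccard index between the feature sets selected on two bootstrap resamples — fs.stability computes such pairwise similarities internally across all k resamples:

```r
# Illustration only: Jaccard index between the feature sets selected
# on two hypothetical bootstrap resamples.
features.boot1 <- c("gene1", "gene2", "gene3", "gene5")
features.boot2 <- c("gene1", "gene3", "gene4", "gene5")

jaccard <- function(a, b) {
    # |intersection| / |union| of the two selected-feature sets
    length(intersect(a, b)) / length(union(a, b))
}

jaccard(features.boot1, features.boot2)  # 3 shared / 5 total = 0.6
```

A value of 1 indicates identical feature sets across resamples; values near 0 indicate unstable selections.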

Usage

fs.stability(X, Y, method, k = 10, p = 0.9, f = NULL,
  stability.metric = "jaccard", optimize = TRUE,
  optimize.resample = FALSE, tuning.grid = NULL, k.folds = if (optimize)
  10 else NULL, repeats = if (k.folds == "LOO") NULL else if (optimize) 3 else
  NULL, resolution = if (is.null(tuning.grid) && optimize) 3 else NULL,
  metric = "Accuracy", model.features = FALSE, allowParallel = FALSE,
  verbose = "none", ...)

Arguments

X

A scaled matrix or dataframe containing numeric values of each feature

Y

A factor vector containing group membership of samples

method

A vector listing models to be fit. Available options are "plsda" (Partial Least Squares Discriminant Analysis), "rf" (Random Forest), "gbm" (Gradient Boosting Machine), "svm" (Support Vector Machines), "glmnet" (Elastic-net Generalized Linear Model), and "pam" (Prediction Analysis of Microarrays)

k

Number of bootstrapped iterations

p

Proportion of the data to be used for training

f

Number of features desired. If rank correlation is desired, set "f = NULL"

stability.metric

String indicating the type of stability metric. Available options are "jaccard" (Jaccard Index/Tanimoto Distance), "sorensen" (Dice-Sorensen's Index), "ochiai" (Ochiai's Index), "pof" (Percent of Overlapping Features), "kuncheva" (Kuncheva's Stability Measure), "spearman" (Spearman Rank Correlation), and "canberra" (Canberra Distance)

optimize

Logical argument determining if each model should be optimized. Default "optimize = TRUE"

optimize.resample

Logical argument determining if each resample should be re-optimized. The default, "optimize.resample = FALSE", performs only one optimization run; subsequent models use the initially determined parameters

tuning.grid

Optional list of grids containing parameters to optimize for each algorithm. Default "tuning.grid = NULL" lets the function create a grid determined by "resolution"

k.folds

Number of folds generated during cross-validation. May optionally be set to "LOO" for leave-one-out cross-validation. Default "k.folds = 10"

repeats

Number of times cross-validation is repeated. Default "repeats = 3"

resolution

Resolution of model optimization grid. Default "resolution = 3"

metric

Criteria for model optimization. Available options are "Accuracy" (Prediction Accuracy), "Kappa" (Kappa Statistic), and "AUC-ROC" (Area Under the Receiver Operating Characteristic Curve)

model.features

Logical argument determining whether the number of features selected should be determined by the individual model runs. Default "model.features = FALSE"

allowParallel

Logical argument dictating if parallel processing is allowed via the foreach package. Default "allowParallel = FALSE"

verbose

Character argument specifying how much progress output to print. Options are 'none', 'minimal', or 'full'.

...

Extra arguments that the user would like to apply to the models

Value

methods

Vector of models fit to data

performance

Performance metrics of each model and bootstrap iteration

RPT

Robustness-Performance Trade-Off as defined in Saeys 2008

features

List of the features determined via each algorithm's feature selection criteria.

stability.models

Function perturbation metric, i.e., how similar the features selected by each model are.

original.best.tunes

If "optimize.resample = TRUE" then returns list of optimized parameters for each bootstrap.

final.best.tunes

If "optimize.resample = TRUE" then returns list of optimized parameters for each bootstrap of models refit to selected features.

specs

List containing the specifications of the run.
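As defined in Saeys et al. (2008), the RPT returned above combines the robustness (stability) and performance scores in an F-measure-like fashion; with beta = 1 it reduces to their harmonic mean. A minimal sketch in base R (the function name is illustrative, not part of the package API):

```r
# Robustness-Performance Trade-off (Saeys et al. 2008):
# RPT = ((beta^2 + 1) * robustness * performance) /
#       (beta^2 * robustness + performance)
# beta = 1 weights stability and performance equally.
rpt <- function(robustness, performance, beta = 1) {
    ((beta^2 + 1) * robustness * performance) /
        (beta^2 * robustness + performance)
}

rpt(robustness = 0.8, performance = 0.9)  # ~0.847
```

Because it is a harmonic-mean-style score, RPT is high only when the selected features are both stable across resamples and predictive.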

Author(s)

Charles Determan Jr

References

Saeys Y., Abeel T., et. al. (2008) Machine Learning and Knowledge Discovery in Databases. 313-325. http://link.springer.com/chapter/10.1007/978-3-540-87481-2_21

Examples

dat.discr <- create.discr.matrix(
    create.corr.matrix(
        create.random.matrix(nvar = 50, 
                             nsamp = 100, 
                             st.dev = 1, 
                             perturb = 0.2)),
    D = 10
)

vars <- dat.discr$discr.mat
groups <- dat.discr$classes

fits <- fs.stability(vars, 
                     groups, 
                     method = c("plsda", "rf"), 
                     f = 10, 
                     k = 3, 
                     k.folds = 10, 
                     verbose = 'none')

OmicsMarkeR documentation built on April 28, 2020, 6:54 p.m.