predictDSC: Phenotype prediction using microarray data: approach of the best overall team in the IMPROVER Diagnostic Signature Challenge


View source: R/predictDSC.R

Description

This function implements the classification pipeline of the best overall team (Team221) in the IMPROVER Diagnostic Signature Challenge. The function also offers the option of exploring other combinations of data preprocessing, feature selection, and classifier types.

Usage

predictDSC(ano, celfile.path, annotation, preprocs = c("rma", "gcrma", "mas5"),
  filters = c("mttest", "ttest", "wilcox"), classifiers = c("LDA", "kNN", "svm"),
  FCT = 1.0, CVP = 4, NF = 10, by = ifelse(NF > 10, 2, 1), NR = 5)

Arguments

ano

A data frame with two columns, files and group, giving the names of the Affymetrix .cel files (without the full path) and their corresponding groups. Only two groups are allowed, plus an optional third group called "Test"; the "Test" samples are not used in training but are used to normalize the training data with. A sketch of such a data frame is shown below.
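For illustration, a minimal sketch of how ano could be laid out (the .cel file names and group labels are hypothetical; only the column names files and group come from this page):

# Hypothetical 'ano' data frame: two training groups plus optional "Test" samples
ano <- data.frame(
  files = c("sampleA1.CEL", "sampleA2.CEL", "sampleB1.CEL",
            "sampleB2.CEL", "sampleT1.CEL"),
  group = c("GroupA", "GroupA", "GroupB", "GroupB", "Test")
)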

celfile.path

The location of the directory where the .cel files are located.

annotation

The name of an annotation package that can be used to map the probesets to Entrez Gene IDs, in order to deal with duplicate probesets per gene, e.g. hgu133plus2.db.

preprocs

A character vector giving the names of the normalization methods to try. Supported options are "rma", "gcrma", and "mas5".

filters

A character vector giving the names of the methods used to rank features. Supported options are "mttest" for the moderated t-test from the limma package, "ttest" for the regular t-test, and "wilcox" for the Wilcoxon test. A sketch of what the moderated t-test ranking refers to is given below.
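As an illustration of the "mttest" option only, a minimal sketch of moderated t-test ranking with limma; the expression matrix eset and the two-group factor grp are hypothetical, and the actual ranking performed inside predictDSC may differ in its details:

library(limma)
# eset: probesets x samples log2 expression matrix; grp: factor with the two groups
design <- model.matrix(~ grp)
fit <- eBayes(lmFit(eset, design))
topTable(fit, coef = 2, number = 10)  # top 10 probesets by moderated t-statistic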

classifiers

A character vector giving the names of the classifier types to use for learning the relation between expression levels and phenotype. Supported options are "LDA", "kNN", and "svm".

FCT

A numeric value giving the fold change threshold used to filter out non-relevant features. Note that setting it to too large a value can produce an error, since there need to be at least NF probesets with a fold change larger than FCT in each fold of the cross-validation.

CVP

The number of cross-validation partitions to create (minimum is 2). Use a CVP value that ensures at least two samples from the smallest group are kept for testing at each fold. E.g., if you have 10 samples in the smallest of the two groups, a CVP of 4 would be the maximum.

NF

The maximum number of features that would make sense to consider using as predictors in the models. NF should be less than the number of training samples.

by

The size of the step when searching for the number of features to include. By default the search starts with the top 2 features, and "by" features are added at each step up to NF; see the sketch below for the resulting grid of candidate feature counts.
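For example, assuming the search grid is the arithmetic sequence implied by the description above:

NF <- 10
by <- ifelse(NF > 10, 2, 1)  # default step size
seq(2, NF, by = by)          # candidate numbers of features: 2 3 4 ... 10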

NR

An integer between 1 and Inf giving the number of times the cross-validation should be repeated, to ensure a robust answer to the question: how many features to use as predictors in the model?

Details

See cited documents for more details.

Value

A list object containing one item for each possible combination between the elements of preprocs, filters, and classifiers. Each item of the list contains the following information:

predictions - A data frame with the predicted class membership belief value (posterior probability) for each sample (row) and each class (column).

features - Names of the Affymetrix probesets used as predictors by the model. A letter "F" is added as a suffix to the probeset names.

model - A fitted model object as produced by the lda, svm and kNN functions.

performanceTr - A matrix giving the number of features tested (NN), the mean AUC over all folds and repetitions (meanAUC), and the standard deviation of the AUC values across folds and repeats of the cross-validation.

best_AUC - The value of the mean AUC corresponding to the optimal number of features chosen.
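As a minimal sketch of how the components of one returned combination could be inspected (assuming modlist is the list returned by predictDSC, as in the Examples below):

# inspect the first methods combination in the returned list
res <- modlist[[1]]
head(res$predictions)  # per-sample class membership beliefs
res$features           # probesets used as predictors ("F" suffix added)
res$performanceTr      # mean and sd of AUC versus number of features
res$best_AUC           # mean AUC at the chosen number of features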

Author(s)

Adi Laurentiu Tarca <atarca@med.wayne.edu>

References

Adi L. Tarca, Mario Lauria, Michael Unger, Erhan Bilal, Stephanie Boue, Kushal Kumar Dey, Julia Hoeng, Heinz Koeppl, Florian Martin, Pablo Meyer, Preetam Nandy, Raquel Norel, Manuel Peitsch, Jeremy J Rice, Roberto Romero, Gustavo Stolovitzky, Marja Talikka, Yang Xiang, Christoph Zechner, and IMPROVER DSC Collaborators, Strengths and limitations of microarray-based phenotype prediction: Lessons learned from the IMPROVER Diagnostic Signature Challenge. Bioinformatics, submitted 2013.

Tarca AL, Than NG, Romero R, Methodological Approach from the Best Overall Team in the IMPROVER Diagnostic Signature Challenge, Systems Biomedicine, submitted, 2013.

See Also

aggregateDSC

Examples

library(maPredictDSC)
library(LungCancerACvsSCCGEO)
data(LungCancerACvsSCCGEO)
anoLC
gsLC
table(anoLC$group)

#run a series of methods combinations
modlist=predictDSC(ano=anoLC,celfile.path=system.file("extdata/lungcancer",package="LungCancerACvsSCCGEO"),
annotation="hgu133plus2.db",
preprocs=c("rma"),filters=c("mttest","wilcox"),FCT=1.0,classifiers=c("LDA","kNN"),
CVP=2,NF=4, NR=1)


#rank combinations by the performance on training data (AUC)
trainingAUC=sort(unlist(lapply(modlist,"[[","best_AUC")),decreasing=TRUE)
trainingAUC


#optional step; since we know the class of the test samples, let's see how the
#methods combinations perform on the test data

perfTest=function(out){
perfDSC(pred=out$predictions,gs=gsLC)
}
testPerf=t(data.frame(lapply(modlist,perfTest)))
testPerf=testPerf[order(testPerf[,"AUC"],decreasing=TRUE),]
testPerf

#aggregate predictions from top 3 combinations of methods
best3=names(trainingAUC)[1:3]
aggpred=aggregateDSC(modlist[best3])
#test the aggregated model on the test data
perfDSC(aggpred,gsLC)
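
# Possible follow-up (a sketch re-using the objects created above): inspect the
# components of the single best combination directly, as described under Value.
bestModel <- modlist[[best3[1]]]
head(bestModel$predictions)  # per-sample class membership beliefs
bestModel$features           # probesets used as predictors ("F" suffix added)
bestModel$performanceTr      # mean AUC vs. number of features across CV folds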
