classify.stsvm: Training and predicting using stSVM classification methods


View source: R/stSVM.R

Description

Training and predicting using stSVM classification methods

Usage

classify.stsvm(fold, cuts, ex.sum, x, p, a, y, cv.repeat, DEBUG = DEBUG,
               Gsub = Gsub, op.method = op.method, op = op, aa = aa,
               dk = dk, dk.tf = dk.tf, seed = seed, Cs = Cs)

Arguments

fold

number of folds to perform

cuts

a list of indices used to randomly divide the training set into the folds of the cross-validation

ex.sum

expression data

x

expression data

a

constant value of the random walk kernel

p

number of random walk steps of the random walk kernel

y

a factor comprising the class labels, one per sample.

cv.repeat

the cross-validation repeat for this round of training and prediction

DEBUG

whether to show debugging information on screen.

Gsub

an adjacency matrix that represents the underlying biological network.

op.method

Method for selecting the optimal feature subgroup: "pt" for permutation test, "sp" for span bound.

op

the number of top-ranked features among which the optimum is selected

aa

number of permutation test steps

dk

random walk kernel matrix of the network (a sketch of one possible construction is given after this list)

dk.tf

cut-off p-value of the permutation test

seed

seed for random sampling.

Cs

Soft-margin tuning parameter of the SVM. Defaults to 10^c(-3:3).
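
For illustration only, the following minimal sketch (not the package's own code) shows one common way a p-step random walk kernel such as dk could be derived from the adjacency matrix Gsub, assuming the definition K = (a*I - L)^p with L the normalized graph Laplacian and a >= 2. The helper name rw.kernel is hypothetical.

## Hypothetical helper (assumption, not part of netClass): build a p-step
## random walk kernel from a symmetric adjacency matrix Gsub.
rw.kernel <- function(Gsub, a = 2, p = 2) {
    d <- rowSums(Gsub)                                # node degrees
    d[d == 0] <- 1                                    # guard against isolated nodes
    Dinv <- diag(1 / sqrt(d))
    L <- diag(nrow(Gsub)) - Dinv %*% Gsub %*% Dinv    # normalized graph Laplacian
    K <- a * diag(nrow(Gsub)) - L                     # one-step kernel
    R <- K
    if (p > 1) for (i in 2:p) R <- R %*% K            # raise to the p-th power
    dimnames(R) <- dimnames(Gsub)
    R
}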

Value

fold

the record of the test fold

auc

The AUC value(s) on the test fold

train

The trained models for the training folds

feat

The features selected by each trained model

Author(s)

Yupeng Cun yupeng.cun@gmail.com

References

Yupeng Cun, Holger Fröhlich (2013). Network and Data Integration for Biomarker Signature Discovery via Network Smoothed T-Statistics. PLoS ONE 8(9): e73074. doi:10.1371/journal.pone.0073074

See Also

see cv.stsvm

Examples

#see cv.stsvm
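
# The toy setup below is an illustrative sketch only; the object names, matrix
# orientation, and sizes are assumptions, not taken from the package. It merely
# shows the kind of inputs (expression data, class labels, adjacency matrix)
# that the arguments above refer to; the full procedure is run via cv.stsvm.

set.seed(1234)
x <- matrix(rnorm(40 * 50), nrow = 40,
            dimnames = list(paste0("s", 1:40), paste0("g", 1:50)))  # toy expression data
y <- factor(rep(c("A", "B"), each = 20))                            # toy class labels
Gsub <- matrix(0, 50, 50, dimnames = list(colnames(x), colnames(x)))
Gsub[upper.tri(Gsub)] <- rbinom(sum(upper.tri(Gsub)), 1, 0.1)       # random edges
Gsub <- Gsub + t(Gsub)                                              # symmetric adjacency
# dk could then be built with the rw.kernel() sketch shown after the Arguments,
# and the cross-validation run through cv.stsvm.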
