These functions are used to apply the generic train-and-test mechanism to a classifier that combines principal component analysis (PCA) with logistic regression (LR).
data: The data matrix, with rows as features ("genes") and columns as the samples to be classified.

status: A factor, with two levels, classifying the samples. The length must equal the number of columns in the data matrix.

params: A list of additional parameters used by the classifier; see Details.

pfun: The function used to make predictions on new data, using the trained classifier. Should always be set to predictSVM.

newdata: Another data matrix, with the same number of rows as the training data.

details: A list of additional parameters describing details about the particular classifier; see Details.

...: Optional extra parameters required by the generic "predict" method.
The input arguments to both learnSVM and predictSVM are dictated by the requirements of the general train-and-test mechanism provided by the package.
This classifier is similar in spirit to the "supervised principal components" method implemented in the superpc package. We start by performing univariate two-sample t-tests to identify features that are differentially expressed between two groups of training samples. We then set a cutoff to select features using a bound (alpha) on the false discovery rate (FDR). If the number of selected features is smaller than a prespecified goal (minNgenes), then we increase the FDR until we get the desired number of features. Next, we perform PCA on the selected features from the training data. We retain enough principal components (PCs) to explain a prespecified fraction of the variance (perVar).
We then fit a logistic regression model using these PCs to predict the
binary class of the training data. In order to use this model to make
binary predictions, you must specify a prior probability that a sample belongs to the first of the two groups (where the ordering is determined by the levels of the classification factor, status).
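The training steps described above can be sketched in base R. This is an illustrative approximation, not the package's implementation: the variable names (pvals, fdr, nComp) and the final thresholding rule are assumptions made for the sketch.

```r
set.seed(17)
data   <- matrix(rnorm(100 * 20), nrow = 100)   # 100 genes x 20 samples
status <- factor(rep(c("A", "B"), each = 10))

## 1. Univariate two-sample t-tests, one per gene (row)
pvals <- apply(data, 1, function(g) t.test(g ~ status)$p.value)

## 2. Select genes with FDR-adjusted p-values below alpha;
##    relax the bound if fewer than minNgenes genes pass
alpha <- 0.10; minNgenes <- 10
fdr <- p.adjust(pvals, method = "fdr")
sel <- fdr < alpha
if (sum(sel) < minNgenes) sel <- rank(fdr, ties.method = "first") <= minNgenes

## 3. PCA on the selected genes; keep enough PCs for 80% of the variance
spca    <- prcomp(t(data[sel, ]), center = TRUE)
varFrac <- cumsum(spca$sdev^2) / sum(spca$sdev^2)
nComp   <- which(varFrac >= 0.80)[1]

## 4. Logistic regression of the binary class on the retained PCs
pcs  <- as.data.frame(spca$x[, seq_len(nComp), drop = FALSE])
mmod <- glm(status ~ ., data = pcs, family = binomial)

## 5. Turn fitted probabilities into class labels; here we threshold at
##    the prior probability of the first group (an assumed rule)
prior <- 0.5
pred  <- factor(ifelse(fitted(mmod) < prior,
                       levels(status)[1], levels(status)[2]))
```

Note that fitted(mmod) returns the modeled probability of the second factor level, which is why values below the cutoff map to the first group.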
In order to fit the model to data, the params argument to the learnSVM function should be a list containing components named minNgenes, alpha, perVar, and prior. It may also contain a logical flag that controls the amount of information printed as the algorithm runs.
The result of fitting the model using learnSVM is a member of the FittedModel-class. In addition to storing the
prediction function (
pfun) and the training data and status,
the FittedModel stores those details about the model that are required
in order to make predictions of the outcome on new data. In this case, the details are: the
prior probability, the set of
selected features (
sel, a logical vector), the principal
component decomposition (
spca, an object of the
SamplePCA class), the logistic
regression model (
mmod, of class
glm), the number of PCs used (nCompUsed), the number of components available (nCompAvail), and the number of gene-features selected. This details object is appropriate for
sending as the second argument to the
predictSVM function in
order to make predictions with the model on new data. Note that the
status vector here is the one used for the training data, since
the prediction function only uses the levels of this factor to
make sure that the direction of the predictions is interpreted correctly.
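The role of the stored details can be illustrated with a self-contained base-R sketch: the selected features, the PCA decomposition, the logistic model, and the prior are exactly what is needed to classify new samples. The objects below (sel, spca, mmod, prior) mimic the stored details but are built directly with base R for illustration; this is not the package's own code.

```r
set.seed(23)
train  <- matrix(rnorm(50 * 12), nrow = 50)   # 50 genes x 12 training samples
status <- factor(rep(c("A", "B"), each = 6))
sel    <- seq_len(10)                          # pretend these genes were selected
spca   <- prcomp(t(train[sel, ]), center = TRUE)
pcs    <- as.data.frame(spca$x[, 1:2])
mmod   <- glm(status ~ ., data = pcs, family = binomial)

## New samples must be reduced with the *training* PCA before prediction
newdata <- matrix(rnorm(50 * 5), nrow = 50)    # 5 new samples
newpcs  <- as.data.frame(predict(spca, t(newdata[sel, ]))[, 1:2])

## Threshold the predicted probability of the second level at the prior;
## the levels of the training status factor fix the label direction
prior <- 0.5
prob  <- predict(mmod, newdata = newpcs, type = "response")
pred  <- factor(ifelse(prob < prior, levels(status)[1], levels(status)[2]),
                levels = levels(status))
```

This mirrors why the prediction function only needs the levels of the training status factor: they determine which group each side of the probability cutoff maps to.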
The learnSVM function returns an object of the FittedModel-class, representing a SVM classifier that has been fitted on a training data set.

The predictSVM function returns a factor containing the predictions of the model when applied to the new data set.
Kevin R. Coombes <firstname.lastname@example.org>
# simulate some data
data <- matrix(rnorm(100*20), ncol=20)
status <- factor(rep(c("A", "B"), each=10))
# set up the parameter list
svm.params <- list(minNgenes=10, alpha=0.10, perVar=0.80, prior=0.5)
# learn the model
fm <- learnSVM(data, status, svm.params, predictSVM)
# Make predictions on some new simulated data
newdata <- matrix(rnorm(100*30), ncol=30)
predictSVM(newdata, fm@details, status)