ksvm {kernlab} | R Documentation

Support Vector Machines

Description
Support Vector Machines are an excellent tool for classification,
novelty detection, and regression. ksvm supports the well-known C-svc
and nu-svc (classification), one-class-svc (novelty detection), and
eps-svr and nu-svr (regression) formulations, along with native
multi-class classification formulations and the bound-constraint SVM
formulations. ksvm also supports class-probability output and
confidence intervals for regression.

Usage
## S4 method for signature 'formula'
ksvm(x, data = NULL, ..., subset, na.action = na.omit, scaled = TRUE)

## S4 method for signature 'vector'
ksvm(x, ...)

## S4 method for signature 'matrix'
ksvm(x, y = NULL, scaled = TRUE, type = NULL,
     kernel = "rbfdot", kpar = "automatic",
     C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE,
     class.weights = NULL, cross = 0, fit = TRUE, cache = 40,
     tol = 0.001, shrinking = TRUE, ...,
     subset, na.action = na.omit)

## S4 method for signature 'kernelMatrix'
ksvm(x, y = NULL, type = NULL,
     C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE,
     class.weights = NULL, cross = 0, fit = TRUE, cache = 40,
     tol = 0.001, shrinking = TRUE, ...)

## S4 method for signature 'list'
ksvm(x, y = NULL, type = NULL,
     kernel = "stringdot", kpar = list(length = 4, lambda = 0.5),
     C = 1, nu = 0.2, epsilon = 0.1, prob.model = FALSE,
     class.weights = NULL, cross = 0, fit = TRUE, cache = 40,
     tol = 0.001, shrinking = TRUE, ...,
     na.action = na.omit)

Arguments
x: a symbolic description of the model to be fit. When not using a
   formula, x can be a matrix or vector containing the training data, a
   kernel matrix of class kernelMatrix of the training data, or a list
   of character vectors (for use with the string kernel).

data: an optional data frame containing the training data, when using a
   formula. By default the data is taken from the environment which
   'ksvm' is called from.

y: a response vector with one label for each row/component of x. Can be
   either a factor (for classification tasks) or a numeric vector (for
   regression).

scaled: a logical vector indicating the variables to be scaled. If
   scaled is of length 1, the value is recycled as many times as needed
   and all non-binary variables are scaled. Per default, data are scaled
   internally (both x and y variables) to zero mean and unit variance.
   The center and scale values are returned and used for later
   predictions.

type: ksvm can be used for classification, for regression, or for
   novelty detection. Depending on whether y is a factor or not, the
   default setting for type is C-svc or eps-svr, respectively, but this
   can be overwritten by setting an explicit value. Valid options are
   C-svc (C classification), nu-svc (nu classification), C-bsvc
   (bound-constraint SVM classification), spoc-svc (Crammer, Singer
   native multi-class), kbb-svc (Weston, Watkins native multi-class),
   one-svc (novelty detection), eps-svr (epsilon regression), nu-svr
   (nu regression), and eps-bsvr (bound-constraint SVM regression).
kernel: the kernel function used in training and predicting. This
   parameter can be set to any function, of class kernel, which computes
   the inner product in feature space between two vector arguments (see
   kernels). kernlab provides the most popular kernel functions, which
   can be selected by setting the kernel parameter to one of the strings
   "rbfdot" (radial basis, Gaussian), "polydot" (polynomial),
   "vanilladot" (linear), "tanhdot" (hyperbolic tangent), "laplacedot"
   (Laplacian), "besseldot" (Bessel), "anovadot" (ANOVA RBF),
   "splinedot" (spline), or "stringdot" (string kernel). Setting the
   kernel parameter to "matrix" treats x as a kernel matrix, calling the
   kernelMatrix interface. The kernel parameter can also be set to a
   user-defined function of class kernel by passing the function name as
   an argument.

kpar: the list of hyper-parameters (kernel parameters). This is a list
   which contains the parameters to be used with the kernel function,
   e.g. sigma for "rbfdot" and "laplacedot"; degree, scale, and offset
   for "polydot"; or length and lambda for "stringdot". Hyper-parameters
   for user-defined kernels can be passed through the kpar parameter as
   well. In the case of a radial basis (Gaussian) kernel, kpar can also
   be set to the string "automatic", which uses the heuristics in sigest
   to calculate a good sigma value from the data (default: "automatic");
   see the sketch after this argument list.
C: cost of constraints violation (default: 1); this is the 'C'-constant
   of the regularization term in the Lagrange formulation.

nu: parameter needed for nu-svc, one-svc, and nu-svr. The nu parameter
   sets the upper bound on the training error and the lower bound on the
   fraction of data points to become support vectors (default: 0.2).

epsilon: epsilon in the insensitive-loss function used for eps-svr,
   nu-svr, and eps-bsvr (default: 0.1).

prob.model: if set to TRUE, builds a model for calculating class
   probabilities or, in case of regression, calculates the scaling
   parameter of the Laplacian distribution fitted on the residuals.
   Fitting is done on output data created by performing a 3-fold
   cross-validation on the training data. For details see references
   (default: FALSE).

class.weights: a named vector of weights for the different classes,
   used for asymmetric class sizes. Not all factor levels have to be
   supplied (default weight: 1). All components have to be named; see
   the sketch after this argument list.

cache: cache memory in MB (default: 40).

tol: tolerance of termination criterion (default: 0.001).

shrinking: option whether to use the shrinking heuristics
   (default: TRUE).

cross: if an integer value k > 0 is specified, a k-fold cross-validation
   on the training data is performed to assess the quality of the model:
   the accuracy rate for classification and the mean squared error for
   regression.

fit: indicates whether the fitted values should be computed and included
   in the model or not (default: TRUE).

...: additional parameters for the low-level fitting function.

subset: an index vector specifying the cases to be used in the training
   sample. (NOTE: If given, this argument must be named.)

na.action: a function to specify the action to be taken if NAs are
   found. The default action is na.omit, which leads to rejection of
   cases with missing values on any required variable. An alternative is
   na.fail, which triggers an error if NA cases are found. (NOTE: If
   given, this argument must be named.)
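A minimal sketch of the two conventions flagged above (kpar =
"automatic" and a fully named class.weights vector), using the iris
data from base R; the weight values are purely illustrative:

library(kernlab)
data(iris)

## kpar = "automatic": sigma for the Gaussian kernel is chosen via sigest
auto <- ksvm(Species ~ ., data = iris, kernel = "rbfdot",
             kpar = "automatic", C = 1)

## class.weights: every component must be named; values here are arbitrary
wts <- c(setosa = 1, versicolor = 1, virginica = 5)
wsvm <- ksvm(Species ~ ., data = iris, type = "C-svc",
             class.weights = wts, C = 1)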
Details

ksvm uses John Platt's SMO algorithm for solving the SVM QP problem in
most SVM formulations. For the spoc-svc, kbb-svc, C-bsvc, and eps-bsvr
formulations, a chunking algorithm based on the TRON QP solver is used.
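A minimal sketch of one of the TRON-solved formulations, eps-bsvr, on
synthetic data; the parameter values are arbitrary illustrations, not
recommendations:

library(kernlab)
set.seed(1)
x <- matrix(rnorm(200), ncol = 2)
y <- x[, 1]^2 + rnorm(100, sd = 0.1)

## bound-constraint eps-regression; handled by the TRON-based chunking code
bre <- ksvm(x, y, type = "eps-bsvr", kernel = "rbfdot",
            kpar = list(sigma = 1), C = 5, epsilon = 0.1)
bre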
For multiclass classification with k classes, k > 2, ksvm uses the
'one-against-one' approach, in which k(k-1)/2 binary classifiers are
trained; the appropriate class is found by a voting scheme. The
spoc-svc and kbb-svc formulations deal with the multiclass
classification problem by solving a single quadratic problem involving
all the classes; see the sketch below.
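As a short illustration of a native multi-class fit, the following
sketch uses the Crammer, Singer formulation (spoc-svc) on the iris
data, with an arbitrary C:

library(kernlab)
data(iris)

## single-QP multi-class formulation (Crammer & Singer)
spoc <- ksvm(Species ~ ., data = iris, type = "spoc-svc", C = 10)
spoc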
If the predictor variables include factors, the formula interface must
be used to get a correct model matrix.

In classification, when prob.model is TRUE, a 3-fold cross-validation
is performed on the data and a sigmoid function is fitted on the
resulting decision values f.
The data can be passed to the ksvm function in a matrix or a
data.frame; in addition, ksvm also supports input in the form of a
kernel matrix of class kernelMatrix or as a list of character vectors
where a string kernel has to be used.

The plot function for binary classification ksvm objects displays a
contour plot of the decision values with the corresponding support
vectors highlighted.

The predict function can return class probabilities for classification
problems by setting the type parameter to "probabilities".
The problem of model selection is partially addressed by an empirical
observation for the RBF kernels (Gaussian, Laplace), where the optimal
values of the sigma width parameter are shown to lie between the 0.1
and 0.9 quantiles of the ||x - x'|| statistics. When using an RBF
kernel and setting kpar to "automatic", ksvm uses the sigest function
to estimate these quantiles and uses the median of the values.
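The heuristic can also be applied by hand via sigest; per the paragraph
above, kpar = "automatic" amounts to using the median of the returned
estimates:

library(kernlab)
data(iris)

## 0.1 quantile, median, and 0.9 quantile of the sigma estimates
srange <- sigest(Species ~ ., data = iris)
srange

## roughly what kpar = "automatic" does: plug in the median estimate
m <- ksvm(Species ~ ., data = iris, kernel = "rbfdot",
          kpar = list(sigma = srange[2]), C = 1)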
Value

An S4 object of class "ksvm" containing the fitted model. Accessor
functions can be used to access the slots of the object (see examples),
which include:
alpha: the resulting support vectors (alpha vector), possibly scaled.

alphaindex: the index of the resulting support vectors in the data
   matrix. Note that this index refers to the pre-processed data (after
   the possible effect of na.omit and subset).

coef: the corresponding coefficients times the training labels.

b: the negative intercept.

nSV: the number of support vectors.

obj: the value of the objective function. In case of one-against-one
   classification this is a vector of values.

error: training error.

cross: cross-validation error (when cross > 0).

prob.model: contains the width of the Laplacian fitted on the residuals
   in case of regression, or the parameters of the sigmoid fitted on the
   decision values in case of classification.
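A brief sketch of the accessor functions on a fitted object; alpha,
alphaindex, b, nSV, error, and cross are all exported by kernlab:

library(kernlab)
data(iris)

m <- ksvm(Species ~ ., data = iris, type = "C-svc", cross = 3)

alpha(m)       # support vector coefficients, per binary sub-problem
alphaindex(m)  # indices of the support vectors in the (pre-processed) data
b(m)           # negative intercept(s)
nSV(m)         # number of support vectors
error(m)       # training error
cross(m)       # cross-validation error (cross = 3 above)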
Note

Data is scaled internally by default, usually yielding better results.
Author(s)

Alexandros Karatzoglou (SMO optimizers in C++ by Chih-Chung Chang and
Chih-Jen Lin)
alexandros.karatzoglou@ci.tuwien.ac.at
References

Chang, Chih-Chung and Lin, Chih-Jen.
LIBSVM: a library for Support Vector Machines.
https://www.csie.ntu.edu.tw/~cjlin/libsvm/

Hsu, Chih-Wei and Lin, Chih-Jen.
BSVM.
https://www.csie.ntu.edu.tw/~cjlin/bsvm/

Platt, J.
Probabilistic outputs for support vector machines and comparison to
regularized likelihood methods.
In Advances in Large Margin Classifiers, A. Smola, P. Bartlett,
B. Schoelkopf and D. Schuurmans, Eds. Cambridge, MA: MIT Press, 2000.

Lin, H.-T., Lin, C.-J. and Weng, R. C.
A note on Platt's probabilistic outputs for support vector machines.
https://www.csie.ntu.edu.tw/~htlin/paper/doc/plattprob.pdf

Hsu, C.-W. and Lin, C.-J.
A comparison of methods for multi-class support vector machines.
IEEE Transactions on Neural Networks, 13 (2002) 415-425.
https://www.csie.ntu.edu.tw/~cjlin/papers/multisvm.pdf

Crammer, K. and Singer, Y.
On the learnability and design of output codes for multiclass problems.
Computational Learning Theory, 35-46, 2000.
http://www.learningtheory.org/colt2000/papers/CrammerSinger.pdf

Weston, J. and Watkins, C.
Multi-class support vector machines.
Technical Report CSD-TR-98-04, Royal Holloway, University of London,
Department of Computer Science.
See Also

predict.ksvm, ksvm-class, couple
Examples

## simple example using the spam data set
data(spam)
## create test and training set
index <- sample(1:dim(spam)[1])
spamtrain <- spam[index[1:floor(dim(spam)[1]/2)], ]
spamtest <- spam[index[(floor(dim(spam)[1]/2) + 1):dim(spam)[1]], ]
## train a support vector machine
filter <- ksvm(type~.,data=spamtrain,kernel="rbfdot",
kpar=list(sigma=0.05),C=5,cross=3)
filter
## predict mail type on the test set
mailtype <- predict(filter,spamtest[,-58])
## Check results
table(mailtype,spamtest[,58])
## Another example with the famous iris data
data(iris)
## Create a kernel function using the built-in rbfdot function
rbf <- rbfdot(sigma=0.1)
rbf
## train a bound constraint support vector machine
irismodel <- ksvm(Species~.,data=iris,type="C-bsvc",
kernel=rbf,C=10,prob.model=TRUE)
irismodel
## get fitted values
fitted(irismodel)
## Test on the training set with probabilities as output
predict(irismodel, iris[,-5], type="probabilities")
## Demo of the plot function
x <- rbind(matrix(rnorm(120),,2),matrix(rnorm(120,mean=3),,2))
y <- matrix(c(rep(1,60),rep(-1,60)))
svp <- ksvm(x,y,type="C-svc")
plot(svp,data=x)
### Use kernelMatrix
K <- as.kernelMatrix(crossprod(t(x)))
svp2 <- ksvm(K, y, type="C-svc")
svp2
# test data
xtest <- rbind(matrix(rnorm(20),,2),matrix(rnorm(20,mean=3),,2))
# test kernel matrix i.e. inner/kernel product of test data with
# Support Vectors
Ktest <- as.kernelMatrix(crossprod(t(xtest),t(x[SVindex(svp2), ])))
predict(svp2, Ktest)
#### Use custom kernel
k <- function(x,y) {(sum(x*y) +1)*exp(-0.001*sum((x-y)^2))}
class(k) <- "kernel"
data(promotergene)
## train svm using custom kernel
gene <- ksvm(Class~.,data=promotergene[c(1:20, 80:100),],kernel=k,
C=5,cross=5)
gene
#### Use text with string kernels
data(reuters)
is(reuters)
tsv <- ksvm(reuters,rlabels,kernel="stringdot",
kpar=list(length=5),cross=3,C=10)
tsv
## regression
# create data
x <- seq(-20,20,0.1)
# note: sin(x)/x is NaN at x = 0; the default na.action (na.omit) drops that case
y <- sin(x)/x + rnorm(401,sd=0.03)
# train support vector machine
regm <- ksvm(x,y,epsilon=0.01,kpar=list(sigma=16),cross=3)
plot(x,y,type="l")
lines(x,predict(regm,x),col="red")