alphasvm is used to train a support vector machine. It can be used to carry out general regression and classification (of nu- and epsilon-type), as well as density estimation. A formula interface is provided.
Print alphasvm object
Summary alphasvm object
Print summary.alphasvm object
alphasvm(x, ...)
## S3 method for class 'formula'
alphasvm(formula, data = NULL, ..., subset,
na.action = stats::na.omit, scale = FALSE)
## Default S3 method:
alphasvm(x, y = NULL, scale = FALSE, type = NULL,
kernel = "radial", degree = 3, gamma = if (is.vector(x)) 1 else
1/ncol(x), coef0 = 0, cost = 1, nu = 0.5, class.weights = NULL,
cachesize = 40, tolerance = 0.001, epsilon = 0.1, shrinking = TRUE,
cross = 0, probability = FALSE, fitted = TRUE, alpha = NULL,
mute = TRUE, ..., subset, na.action = stats::na.omit)
## S3 method for class 'alphasvm'
print(x, ...)
## S3 method for class 'alphasvm'
summary(object, ...)
## S3 method for class 'summary.alphasvm'
print(x, ...)

x 
a data matrix, a vector, or a sparse matrix (object of class matrix.csr as provided by the package SparseM). 
... 
additional parameters for the low-level fitting function 
formula 
a symbolic description of the model to be fit. 
data 
an optional data frame containing the variables in the model. By default the variables are taken from the environment from which 'alphasvm' is called. 
subset 
An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.) 
na.action 
A function specifying the action to be taken if NAs are found (default: stats::na.omit, which omits rows containing missing values). 
scale 
A logical vector indicating the variables to be scaled. If scale is of length 1, the value is recycled as many times as needed. When scaling is enabled, data are scaled internally (both x and y variables) to zero mean and unit variance; the center and scale values are returned and used for later predictions. Note that the default here is scale = FALSE, so no scaling is performed unless requested. 
y 
a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression). 
type 
alphasvm can be used as a classification machine. The default setting for type is C-classification, but it may be set to nu-classification as well. 
kernel 
the kernel used in training and predicting. You might consider changing some of the following parameters, depending on the kernel type.

degree 
parameter needed for kernel of type polynomial (default: 3) 
gamma 
parameter needed for all kernels except linear (default: 1 if x is a vector, 1/(data dimension) otherwise) 
coef0 
parameter needed for kernels of type polynomial and sigmoid (default: 0) 
cost 
cost of constraints violation (default: 1); it is the 'C'-constant of the regularization term in the Lagrange formulation. 
nu 
parameter needed for nu-classification (default: 0.5) 
class.weights 
a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named. 
cachesize 
cache memory in MB (default: 40) 
tolerance 
tolerance of termination criterion (default: 0.001) 
epsilon 
epsilon in the insensitive-loss function (default: 0.1) 
shrinking 
option whether to use the shrinking heuristics (default: TRUE) 
cross 
if an integer value k > 0 is specified, a k-fold cross validation on the training data is performed to assess the quality of the model: the accuracy rate for classification and the Mean Squared Error for regression 
probability 
logical indicating whether the model should allow for probability predictions. 
fitted 
logical indicating whether the fitted values should be computed and included in the model or not (default: TRUE) 
alpha 
Initial values for the coefficients (default: NULL) 
mute 
a logical value indicating whether to print training information from svm. 
object 
An object of class alphasvm 
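As an illustration of the class.weights and cross arguments described above, a minimal sketch (it assumes the package providing alphasvm is already loaded, and uses the iris data from the examples below):

```r
## Sketch: named class weights and built-in cross validation.
data(iris)

## Named weight vector: upweight "setosa"; unnamed levels keep weight 1.
wts = c(setosa = 2)

## cross = 5 performs 5-fold cross validation on the training data and
## records the per-fold accuracies in the fitted object.
model = alphasvm(Species ~ ., data = iris,
                 class.weights = wts, cross = 5)
summary(model)
```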
For multi-class classification with k levels, k > 2, libsvm uses the 'one-against-one' approach, in which k(k-1)/2 binary classifiers are trained; the appropriate class is found by a voting scheme.
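The combinatorics can be checked directly in R: for k classes, the one-against-one scheme trains choose(k, 2) = k(k-1)/2 binary classifiers.

```r
k = 3            # e.g. the three species in iris
choose(k, 2)     # k*(k-1)/2 = 3 binary classifiers
choose(10, 2)    # 45 classifiers for a 10-class problem
```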
libsvm internally uses a sparse data representation, which is also supported at a high level by the package SparseM.
If the predictor variables include factors, the formula interface must be used to get a correct model matrix.
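For example (a hypothetical data frame for illustration; any factor predictor behaves the same way):

```r
## 'grp' is a factor predictor, so the formula interface is needed to
## expand it into a correct model matrix (dummy coding).
set.seed(1)
df = data.frame(y   = factor(rep(c("a", "b"), each = 50)),
                x1  = rnorm(100),
                grp = factor(sample(c("u", "v", "w"), 100, replace = TRUE)))
model = alphasvm(y ~ ., data = df)
```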
plot.svm allows a simple graphical visualization of classification models.
The probability model for classification fits a logistic distribution using maximum likelihood to the decision values of all binary classifiers, and computes the a posteriori class probabilities for the multi-class problem using quadratic optimization. The probabilistic regression model assumes (zero-mean) Laplace-distributed errors for the predictions, and estimates the scale parameter using maximum likelihood.
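A sketch of using the probability model, assuming alphasvm follows the e1071::svm interface here (probability = TRUE at both fit and predict time, with class probabilities attached as the "probabilities" attribute); this interface is inferred from the authorship note below, not confirmed by this page:

```r
## Fit with probability = TRUE so the logistic / Laplace probability model
## described above is estimated alongside the decision values.
data(iris)
model = alphasvm(Species ~ ., data = iris, probability = TRUE)

## Request a posteriori class probabilities at prediction time.
pred = predict(model, as.matrix(iris[, -5]), probability = TRUE)
head(attr(pred, "probabilities"))
```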
Tong He (based on package e1071 by David Meyer and C/C++ code by Cho-Jui Hsieh in Divide-and-Conquer kernel SVM (DC-SVM))
Chang, Chih-Chung and Lin, Chih-Jen:
LIBSVM: a library for Support Vector Machines
http://www.csie.ntu.edu.tw/~cjlin/libsvm
Exact formulations of models, algorithms, etc. can be found in the
document:
Chang, Chih-Chung and Lin, Chih-Jen:
LIBSVM: a library for Support Vector Machines
http://www.csie.ntu.edu.tw/~cjlin/papers/libsvm.ps.gz
More implementation details and speed benchmarks can be found on:
Rong-En Fan, Pai-Hsuen Chen, and Chih-Jen Lin:
Working Set Selection Using the Second Order Information for Training SVM
http://www.csie.ntu.edu.tw/~cjlin/papers/quadworkset.pdf
data(svmguide1)
svmguide1.t = svmguide1[[2]]
svmguide1 = svmguide1[[1]]
model = alphasvm(x = svmguide1[,-1], y = svmguide1[,1], scale = TRUE)
preds = predict(model, svmguide1.t[,-1])
table(preds, svmguide1.t[,1])
data(iris)
attach(iris)
# default with factor response:
model = alphasvm(Species ~ ., data = iris)
# get new alpha
new.alpha = matrix(0, nrow(iris), 2)
new.alpha[model$index,] = model$coefs
model2 = alphasvm(Species ~ ., data = iris, alpha = new.alpha)
preds = predict(model2, as.matrix(iris[,-5]))
table(preds, iris[,5])
