Evaluation for Support Vector Machines (SVM) by cross-validation
Arguments:

X: standardized complete X data matrix (training and test data)
grp: factor with groups for the complete data (training and test data)
train: row indices of X indicating training data objects
kfold: number of folds for cross-validation
gamvec: range of gamma values to evaluate
kernel: kernel to be used for the SVM; one of "radial", "linear", "polynomial", "sigmoid"; defaults to "radial"
degree: degree of the polynomial if kernel is "polynomial"; defaults to 3
plotit: if TRUE a plot will be generated
legend: if TRUE a legend will be added to the plot
legpos: position of the legend in the plot
...: additional plot arguments
Details:

The data are split into a calibration set and a test set (the calibration set is given by "train"). Within the calibration set, "kfold"-fold CV is performed: the classification method is fit to "kfold"-1 parts and evaluated on the remaining part. The misclassification error is then computed for the training data, for the CV test data (CV error), and for the test data.
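The CV loop described above can be sketched in base R as follows. This is a simplified illustration, not the package code: the stand-in "classifier" simply predicts the majority class of the fitting part, whereas svmEval fits svm() from e1071 for each gamma value.

```r
# Minimal sketch of k-fold CV error estimation within a calibration set.
# The classifier below (majority class) is a placeholder for svm().
cv_error <- function(X, grp, train, kfold = 5) {
  # randomly assign each training object to one of kfold folds
  folds <- sample(rep(1:kfold, length.out = length(train)))
  cverr <- numeric(kfold)
  for (i in 1:kfold) {
    cal <- train[folds != i]   # kfold-1 parts used for fitting
    val <- train[folds == i]   # held-out part used for evaluation
    # stand-in "model": predict the most frequent class in the fitting part
    pred <- names(which.max(table(grp[cal])))
    cverr[i] <- mean(grp[val] != pred)   # misclassification rate on the fold
  }
  list(cvMean = mean(cverr),                 # mean of CV errors
       cvSe   = sd(cverr) / sqrt(kfold),    # standard error of CV errors
       cverr  = cverr)                       # all errors from CV
}
```

The returned components mirror the cvMean, cvSe, and cverr values documented for svmEval.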
Value:

trainerr: training error rate
testerr: test error rate
cvMean: mean of the CV errors
cvSe: standard error of the CV errors
cverr: all errors from CV
gamvec: range of gamma values, taken from input
Peter Filzmoser <P.Filzmoser@tuwien.ac.at>
K. Varmuza and P. Filzmoser: Introduction to Multivariate Statistical Analysis in Chemometrics. CRC Press, Boca Raton, FL, 2009.
Examples:

data(fgl, package = "MASS")   # forensic glass data
grp <- fgl$type               # group membership (glass type)
X <- scale(fgl[, 1:9])        # standardize the nine measured variables
k <- length(unique(grp))      # number of groups
dat <- data.frame(grp, X)
n <- nrow(X)
ntrain <- round(n * 2/3)      # use two thirds of the data for training
require(chemometrics)         # provides svmEval
require(e1071)                # provides the underlying svm()
set.seed(143)
train <- sample(1:n, ntrain)  # random training indices
ressvm <- svmEval(X, grp, train, gamvec = c(0, 0.05, 0.1, 0.2, 0.3, 0.5, 1, 2, 5),
                  legpos = "topright")
title("Support vector machines")
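The returned list can then be used to choose a gamma value, for example the one with the smallest mean CV error. The snippet below is a hedged sketch using a mocked result list shaped like the Value section above; with the example, you would index into the ressvm object instead.

```r
# Mocked result list with the components documented under Value
# (cvMean, cvSe, gamvec); in practice use the list returned by svmEval.
ressvm <- list(cvMean = c(0.40, 0.31, 0.28, 0.30, 0.35),
               cvSe   = c(0.03, 0.03, 0.02, 0.03, 0.04),
               gamvec = c(0, 0.05, 0.1, 0.2, 0.3))
# gamma value with the smallest mean CV error
gamopt <- ressvm$gamvec[which.min(ressvm$cvMean)]
gamopt  # 0.1
```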
