Description

svmEEG is used to train a support vector machine classifier on the features selected by the function FeatureEEG. Internally, this function uses the svm function available in the e1071 package. Thus, it is recommended to understand the svm function before using svmEEG.
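Since svmEEG delegates the actual fitting to svm, it can help to see what an equivalent direct call looks like. A minimal sketch, assuming only that the e1071 package is installed, and using the built-in iris data in place of EEG features:

```r
library(e1071)

## A C-classification SVM with the same defaults svmEEG forwards:
## radial kernel, cost = 1, gamma = 1/ncol(x), probability = TRUE.
fit <- svm(Species ~ ., data = iris,
           type = "C-classification", kernel = "radial",
           cost = 1, probability = TRUE)

## Predicted classes, plus class probabilities (available because the
## model was trained with probability = TRUE):
pred <- predict(fit, iris[1:5, -5], probability = TRUE)
attr(pred, "probabilities")
```

The arguments of svmEEG described below are passed through to svm essentially unchanged, so the svm help page is the authoritative reference for their meaning.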
Usage

svmEEG(x, method = "C-classification", scale = TRUE, kernel = "radial",
       degree = 3, gamma = if (is.vector(x)) 1 else 1/ncol(x), coef0 = 0,
       cost = 1, nu = 0.5, class.weights = NULL, cachesize = 40, tolerance = 0.001,
       epsilon = 0.1, shrinking = TRUE, cross = 0, probability = TRUE,
       fitted = TRUE, seed = 1L, subset, na.action = na.omit)
Arguments

x: the features to be classified. Must be the list returned by the function FeatureEEG.

method: the method to be used in svm (default: "C-classification").

scale: a logical vector indicating the variables to be scaled. If scale is of length 1, the value is recycled as many times as needed. See svm.

kernel: the kernel used in training and predicting. One of "linear", "polynomial", "radial" or "sigmoid". See svm.

degree: parameter needed for kernel of type "polynomial" (default: 3).

gamma: parameter needed for all kernels except "linear" (default: 1/(data dimension)).

coef0: parameter needed for kernels of type "polynomial" and "sigmoid" (default: 0).

cost: cost of constraints violation (default: 1); it is the C-constant of the regularization term in the Lagrange formulation. See svm.

nu: parameter needed for "nu-classification" (default: 0.5). See svm.

class.weights: a named vector of weights for the different classes, used for asymmetric class sizes. Not all factor levels have to be supplied (default weight: 1). All components have to be named. See svm.

cachesize: cache memory in MB (default: 40). See svm.

tolerance: tolerance of termination criterion (default: 0.001). See svm.

epsilon: epsilon in the insensitive-loss function (default: 0.1). See svm.

shrinking: option whether to use the shrinking heuristics (default: TRUE). See svm.

cross: if an integer value k > 0 is specified, a k-fold cross-validation on the training data is performed to assess the quality of the model: the accuracy rate for classification. See svm.

probability: logical indicating whether the model should allow for probability predictions (default: TRUE). See svm.

fitted: logical indicating whether the fitted values should be computed and included in the model or not (default: TRUE). See svm.

seed: integer seed for the random number generator (default: 1L).

subset: an index vector specifying the cases to be used in the training sample. (NOTE: if given, this argument must be named.) See svm.

na.action: a function to specify the action to be taken if NAs are found (default: na.omit). See svm.
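Two of these arguments are easy to get wrong: cross and class.weights. A short sketch of both, again using e1071's svm directly on iris (an assumption for illustration; svmEEG forwards these arguments in the same way):

```r
library(e1071)

## cross = 10 runs a 10-fold cross-validation on the training data;
## for classification the fitted object then carries accuracy estimates.
fit <- svm(Species ~ ., data = iris, cross = 10)
fit$accuracies     # per-fold accuracy (%)
fit$tot.accuracy   # overall cross-validation accuracy (%)

## class.weights must be a *named* vector; unnamed components are an error.
## Here the two rarer (hypothetically) classes are up-weighted:
w <- c(setosa = 1, versicolor = 2, virginica = 2)
fit_w <- svm(Species ~ ., data = iris, class.weights = w)
```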
Details

Internally, this function uses the svm function available in the e1071 package.
Value

A list: an object to be used in the classifyEEG function.
Author(s)

Murilo Coutinho Silva (coutinho.stat@gmail.com), George Freitas von Borries
References

Hastie, T., Tibshirani, R., Friedman, J. (2009) The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York: Springer.

Karatzoglou, A., Meyer, D., Hornik, K. (2006) Support Vector Machines in R. Journal of Statistical Software, Vol. 15, Issue 9.
Examples

library(eegAnalysis)

### Simulating the data set.
Sim <- randEEG(n.class = 2, n.rec = 10, n.signals = 50, n.channels = 2,
               vars = c(2, 1))

### Uncomment the next line to choose your own features
# features <- easyFeatures()

### Selecting the features.
### The selected features may differ between runs because the algorithm
### uses some random functions.
### Obs: features = "example" is used to be fast. Use features = "default"
### or choose your own set of features.
x <- FeatureEEG(Sim$data, Sim$classes.Id, Sim$rec.Id, features = "example",
                Alpha = 0.05, AlphaCorr = 0.9, minacc = 0.8, fast = FALSE)

### Calculating the classifier
y <- svmEEG(x)
y$model

### Generating new data to test the classifier
new <- randEEG(n.class = 2, n.rec = 30, n.signals = 50, n.channels = 2,
               vars = c(2, 1))

### Classifying the new data and counting the number of successes
cont <- 0
for (i in 1:30) {
  data <- new$data[which((new$classes.Id == 1) & (new$rec.Id == i)), ]
  if (classifyEEG(y, data)[2] == 1) cont <- cont + 1
}
for (i in 1:30) {
  data <- new$data[which((new$classes.Id == 2) & (new$rec.Id == i)), ]
  if (classifyEEG(y, data)[2] == 2) cont <- cont + 1
}

### The correct classification rate:
cont / 60
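The two counting loops in the example can also be condensed into a single pass. A sketch, assuming the objects y (from svmEEG) and new (from randEEG) already exist as created above:

```r
## For each recording, score one point per class whose test segment is
## classified correctly, then divide by the 60 classification attempts.
hits <- sum(sapply(1:30, function(i) {
  d1 <- new$data[which((new$classes.Id == 1) & (new$rec.Id == i)), ]
  d2 <- new$data[which((new$classes.Id == 2) & (new$rec.Id == i)), ]
  (classifyEEG(y, d1)[2] == 1) + (classifyEEG(y, d2)[2] == 2)
}))
hits / 60   # correct classification rate
```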