train.svr: Train SVR

Description Usage Arguments Value Examples

Description

Train SVR

Usage

train.svr(xtrain, ytrain, hypparameter, ErrorFunc, PercentValid = 20, 
          kfold = 1, SplitRandom = FALSE, kernel = "radial")

Arguments

xtrain

Data matrix (numeric) containing the input values (predictors) used to train the model.

ytrain

Response vector (numeric) used to train the model.

hypparameter

Hyper-parameters of the model.

ErrorFunc

Error function to be minimized.

PercentValid

Percentage of the data reserved for validation. Default is 20%.

kfold

If an integer value k > 1 is specified, a k-fold cross-validation on the training data is performed. Default is 1 (no cross-validation).

SplitRandom

Whether to split the training set randomly before holding out the validation portion. Default is FALSE.

kernel

Kernel function to be used. Default is 'radial'.
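Based on the function body, PercentValid determines a contiguous hold-out split: the last PercentValid% of rows are reserved for validation (after an optional random shuffle when SplitRandom = TRUE). A minimal base-R sketch of that split logic, assuming 100 training rows and the default PercentValid = 20:

```r
nTrain <- 100                  # hypothetical number of training rows
PercentValid <- 20             # default: last 20% held out for validation
indValid <- nTrain - round(nTrain * (PercentValid / 100))

train_idx <- 1:indValid              # rows used for fitting
valid_idx <- (indValid + 1):nTrain   # rows used for validation
c(length(train_idx), length(valid_idx))
```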

Value

hypparameter

Hyper-parameter of the best trained model.

forecast

A vector of predicted values generated by the best trained model.

svmf

An object of class "svm" containing the fitted model.

ffTrain

Error value on the training data based on ErrorFunc (NULL if kfold > 1).

ffValid

Error value of the validation based on ErrorFunc.

stepvars

Variables selected by the prescreening methods when they are used.

Examples

##---- Should be DIRECTLY executable !! ----
##-- ==>  Define data, use random,
##--	or do  help(data=index)  for the standard data sets.

## The function is currently defined as
## (svm() is from the e1071 package; xval.buffer() builds the k-fold index blocks)
function (xtrain, ytrain, hypparameter, ErrorFunc, PercentValid = 20, 
    kfold = 1, SplitRandom = FALSE, kernel = "radial") 
{
    if (kfold > 1) {
        n.cases = nrow(xtrain)
        index.block <- xval.buffer(n.cases, kfold)
        pred.valid <- rep(Inf, n.cases)
        for (nb in 1:kfold) {
            svr.try <- try(svm(xtrain[index.block[[nb]]$train, 
                , drop = FALSE], ytrain[index.block[[nb]]$train, 
                , drop = FALSE], kernel = kernel, gamma = hypparameter[1], 
                epsilon = hypparameter[2], cost = hypparameter[3]), 
                silent = TRUE)
            if (!inherits(svr.try, "try-error")) {
                pred.valid[index.block[[nb]]$valid] = predict(svr.try, 
                  xtrain[index.block[[nb]]$valid, , drop = FALSE])
            }
            else {
                return(list(error.svm = TRUE))
            }
        }
        return(list(error.svm = FALSE, fitted = pred.valid, ffValid = ErrorFunc(ytrain, 
            pred.valid)))
    }
    else {
        nTrain = nrow(xtrain)
        indValid <- nTrain - round((nTrain * (PercentValid/100)))
        if (SplitRandom) {
            cases <- sample(nTrain)
            xtrain <- xtrain[cases, , drop = FALSE]
            ytrain <- ytrain[cases]
        }
        x.fit.train <- xtrain[1:indValid, , drop = FALSE]
        x.fit.valid <- xtrain[(indValid + 1):nTrain, , drop = FALSE]
        y.fit.train <- ytrain[1:indValid]
        y.fit.valid <- ytrain[(indValid + 1):nTrain]
        svr.try <- try(svm(x.fit.train, y.fit.train, kernel = kernel, 
            gamma = hypparameter[1], epsilon = hypparameter[2], 
            cost = hypparameter[3]), silent = TRUE)
        if (!inherits(svr.try, "try-error")) {
            sv <- svr.try
            return(list(error.svm = FALSE, fitted.train = sv$fitted, 
                fitted.valid = predict(sv, x.fit.valid), ffValid = ErrorFunc(y.fit.valid, 
                  predict(sv, x.fit.valid)), ffTrain = ErrorFunc(y.fit.train, 
                  sv$fitted)))
        }
        else {
            return(list(error.svm = TRUE))
        }
    }
  }
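A hypothetical usage sketch with simulated data and an RMSE error function (rmse, xtrain, ytrain, and the hyper-parameter values are illustrative, not part of the package; hypparameter is ordered gamma, epsilon, cost per the function body). The call to train.svr is guarded so the snippet runs even when the package is not loaded:

```r
## RMSE as the ErrorFunc to be minimized
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))

## Simulated training data: 100 rows, 2 predictors
set.seed(1)
xtrain <- matrix(rnorm(200), ncol = 2)
ytrain <- xtrain[, 1] + rnorm(100, sd = 0.1)

## Hypothetical hyper-parameters: gamma, epsilon, cost
hyp <- c(0.5, 0.1, 1)

## Guarded call (requires this package and e1071 to be loaded)
if (exists("train.svr")) {
  fit <- train.svr(xtrain, ytrain, hyp, rmse, PercentValid = 20)
  fit$ffValid
}
```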

ForAI documentation built on May 2, 2019, 6:14 p.m.