linear_svm: Linear SVM is an L2-regularized support vector machine.

View source: R/linear_svm.R

linear_svm  R Documentation

Linear SVM is an L2-regularized support vector machine.

Description

An implementation of linear SVM for multiclass classification. Given labeled data, a model can be trained and saved for future use; or, a pre-trained model can be used to classify new points.

Usage

linear_svm(
  delta = NA,
  epochs = NA,
  input_model = NA,
  labels = NA,
  lambda = NA,
  max_iterations = NA,
  no_intercept = FALSE,
  num_classes = NA,
  optimizer = NA,
  seed = NA,
  shuffle = FALSE,
  step_size = NA,
  test = NA,
  test_labels = NA,
  tolerance = NA,
  training = NA,
  verbose = getOption("mlpack.verbose", FALSE)
)

Arguments

delta

Margin of difference between correct class and other classes. Default value "1" (numeric).

epochs

Maximum number of full epochs over the dataset for parallel SGD ('psgd'). Default value "50" (integer).

input_model

Existing model (parameters) (LinearSVMModel).

labels

A matrix containing labels (0, 1, ..., num_classes - 1) for the points in the training set (y) (integer row).

lambda

L2-regularization parameter for training. Default value "0.0001" (numeric).

max_iterations

Maximum iterations for optimizer (0 indicates no limit). Default value "10000" (integer).

no_intercept

Do not add the intercept term to the model. Default value "FALSE" (logical).

num_classes

Number of classes for classification; if unspecified (or 0), the number of classes found in the labels will be used. Default value "0" (integer).

optimizer

Optimizer to use for training ('lbfgs' or 'psgd'). Default value "lbfgs" (character).

seed

Random seed. If 0, 'std::time(NULL)' is used. Default value "0" (integer).

shuffle

If TRUE, do not shuffle the order in which data points are visited for parallel SGD. Default value "FALSE" (logical).

step_size

Step size for parallel SGD optimizer. Default value "0.01" (numeric).

test

Matrix containing test dataset (numeric matrix).

test_labels

Matrix containing test labels (integer row).

tolerance

Convergence tolerance for optimizer. Default value "1e-10" (numeric).

training

A matrix containing the training set (the matrix of predictors, X) (numeric matrix).

verbose

Display informational messages and the full list of parameters and timers at the end of execution. Default value "getOption("mlpack.verbose", FALSE)" (logical).

Details

An implementation of linear SVMs that uses either L-BFGS or parallel SGD (stochastic gradient descent) to train the model.

This program allows loading a linear SVM model (via the "input_model" parameter) or training a linear SVM model given training data (specified with the "training" parameter), or both those things at once. In addition, this program allows classification on a test dataset (specified with the "test" parameter) and the classification results may be saved with the "predictions" output parameter. The trained linear SVM model may be saved using the "output_model" output parameter.

The training data, if specified, may have class labels as its last dimension. Alternately, the "labels" parameter may be used to specify a separate vector of labels.
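
For instance, with a hypothetical predictor matrix 'X' and label vector 'y' (placeholder names, not objects defined in this package), either form below should be equivalent:

## Not run: 
# Labels passed separately from the predictors.
output <- linear_svm(training=X, labels=y)

# Labels stored as the last dimension (here assumed to be the last column)
# of the training matrix itself.
output <- linear_svm(training=cbind(X, y))

## End(Not run)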

When a model is being trained, there are many options. L2 regularization (to prevent overfitting) can be specified with the "lambda" option, and the number of classes can be manually specified with the "num_classes" parameter. If an intercept term is not desired in the model, the "no_intercept" parameter can be specified. The margin of difference between the correct class and the other classes can be specified with the "delta" option. The optimizer used to train the model can be specified with the "optimizer" parameter; available options are 'psgd' (parallel stochastic gradient descent) and 'lbfgs' (the L-BFGS optimizer). There are also various parameters for the optimizer: the "max_iterations" parameter specifies the maximum number of allowed iterations, and the "tolerance" parameter specifies the tolerance for convergence. For the parallel SGD optimizer, the "step_size" parameter controls the step size taken at each iteration, and the maximum number of epochs is specified with the "epochs" parameter. If the objective function for your data is oscillating between Inf and 0, the step size is probably too large. There are more parameters for the optimizers, but the C++ interface must be used to access these.
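
As a minimal sketch of the optimizer options described above (the parameter values are illustrative rather than recommended settings, and 'data' and 'labels' are assumed to be already loaded):

## Not run: 
# Train with parallel SGD, controlling the step size, the number of epochs,
# the iteration limit, and the convergence tolerance.
output <- linear_svm(training=data, labels=labels, optimizer="psgd",
  step_size=0.01, epochs=50, max_iterations=10000, tolerance=1e-10)
psgd_model <- output$output_model

## End(Not run)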

Optionally, the model can be used to predict the labels for another matrix of data points, if "test" is specified. The "test" parameter can be specified without the "training" parameter, so long as an existing linear SVM model is given with the "input_model" parameter. The output predictions from the linear SVM model may be saved with the "predictions" parameter.
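
Training and classification can also be combined in a single call. The sketch below assumes 'data', 'labels', 'test', and 'test_labels' are already loaded; when "test_labels" is given, the measured test-set accuracy should appear in the informational output if "verbose" is TRUE:

## Not run: 
# Train on the training set and classify the test set in one call.
output <- linear_svm(training=data, labels=labels, test=test,
  test_labels=test_labels, verbose=TRUE)
test_predictions <- output$predictions

## End(Not run)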

Value

A list with several components:

output_model

Output for trained linear svm model (LinearSVMModel).

predictions

If test data is specified, this matrix is where the predictions for the test set will be saved (integer row).

probabilities

If test data is specified, this matrix is where the class probabilities for the test set will be saved (numeric matrix).
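
As a small sketch of how these components might be used, assuming the "probabilities" matrix holds one row per test point with columns ordered by class label starting at 0 (an assumption about the returned layout, not a guarantee):

## Not run: 
# Predicted classes produced directly by the model.
pred <- output$predictions

# Predicted classes recovered from the class probabilities (assumes one row
# per test point and class labels 0, 1, ..., num_classes - 1).
pred_from_prob <- max.col(output$probabilities) - 1

## End(Not run)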

Author(s)

mlpack developers

Examples

# As an example, to train a linear SVM on the data '"data"' with labels
# '"labels"' with L2 regularization of 0.1, saving the model to
# '"lsvm_model"', the following command may be used:

## Not run: 
output <- linear_svm(training=data, labels=labels, lambda=0.1, delta=1,
  num_classes=0)
lsvm_model <- output$output_model

## End(Not run)

# Then, to use that model to predict classes for the dataset '"test"',
# storing the output predictions in '"predictions"', the following command
# may be used: 

## Not run: 
output <- linear_svm(input_model=lsvm_model, test=test)
predictions <- output$predictions

## End(Not run)
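
# If the true labels for '"test"' are available, a simple accuracy check
# might look like the following (a sketch; '"test_labels"' is assumed to
# hold the true classes):

## Not run: 
accuracy <- mean(as.vector(predictions) == as.vector(test_labels))

## End(Not run)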
