mlSvm: Supervised classification and regression using support vector machine

View source: R/ml_svm.R

mlSvm    R Documentation

Supervised classification and regression using support vector machine

Description

Unified (formula-based) interface version of the support vector machine algorithm provided by e1071::svm().

Usage

mlSvm(train, ...)

ml_svm(train, ...)

## S3 method for class 'formula'
mlSvm(
  formula,
  data,
  scale = TRUE,
  type = NULL,
  kernel = "radial",
  classwt = NULL,
  ...,
  subset,
  na.action
)

## Default S3 method:
mlSvm(
  train,
  response,
  scale = TRUE,
  type = NULL,
  kernel = "radial",
  classwt = NULL,
  ...
)

## S3 method for class 'mlSvm'
predict(
  object,
  newdata,
  type = c("class", "membership", "both"),
  method = c("direct", "cv"),
  na.action = na.exclude,
  ...
)

Arguments

train

a matrix or data frame with predictors.

...

further arguments passed to the classification or regression method. See e1071::svm().

formula

a formula with the left term being the factor variable to predict (for supervised classification), a vector of numbers (for regression) or nothing (for unsupervised classification), and the right term listing the independent, predictive variables, separated with a plus sign. If the data frame provided contains only the dependent and independent variables, one can use the class ~ . short version (that one is strongly encouraged). Variables with a minus sign are eliminated. Calculations on variables are possible according to usual formula conventions (possibly protected by using I()).
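
A minimal sketch of these formula conventions (using the iris data from the Examples section; purely illustrative):

# Shorthand: predict Species from all other variables
mlSvm(Species ~ ., data = iris)
# Drop one predictor with a minus sign
mlSvm(Species ~ . - Sepal.Width, data = iris)
# Computed predictor, protected by I()
mlSvm(Species ~ Petal.Length + I(Petal.Length * Petal.Width), data = iris)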

data

a data.frame to use as a training set.

scale

should the variables be scaled (so that mean = 0 and standard deviation = 1)? TRUE by default. If a vector is provided, it is applied to the variables with recycling.
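
For instance (a sketch only, assuming the four iris predictors):

# Scale the first two predictors only (the vector is recycled if needed)
mlSvm(Species ~ ., data = iris, scale = c(TRUE, TRUE, FALSE, FALSE))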

type

For ml_svm()/mlSvm(), the type of classification or regression machine to use. The default value of NULL uses "C-classification" if the response variable is a factor and "eps-regression" if it is numeric. It can also be "nu-classification" or "nu-regression". The "C" and "nu" versions are basically the same but with a different parameterisation: the range of C is from zero to infinity, while the range of nu is from zero to one. A fifth option is "one-classification", which is specific to novelty detection (finding the items that differ from the rest).

For predict(), the type of prediction to return: "class" (the default) returns the predicted classes, "membership" returns the membership (a number between 0 and 1) to the different classes, and "both" returns both classes and memberships.
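
A quick sketch (again with the iris data) selecting the nu parameterisation when fitting, then asking for both classes and memberships when predicting:

iris_nusvm <- ml_svm(Species ~ ., data = iris, type = "nu-classification")
predict(iris_nusvm, type = "both")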

kernel

the kernel used by svm, see e1071::svm() for further explanations. Can be "radial", "linear", "polynomial" or "sigmoid".

classwt

priors of the classes. Need not add up to one.

subset

index vector with the cases to define the training set in use (this argument must be named, if provided).

na.action

function to specify the action to be taken if NAs are found. For ml_svm(), na.fail is used by default: the calculation is stopped if there is any NA in the data. Another option is na.omit, where cases with missing values on any required variable are dropped (this argument must be named, if provided). For the predict() method, the default, and most suitable, option is na.exclude: rows with NAs in newdata= are excluded from the prediction, but reinjected in the final results so that the number of items is still the same (and in the same order as newdata=).
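
A sketch of these two situations (dat is a hypothetical data frame containing missing values):

# Fitting: drop incomplete cases instead of failing (argument must be named)
m <- ml_svm(Species ~ ., data = dat, na.action = na.omit)
# Prediction: na.exclude keeps the result aligned with the rows of newdata
predict(m, newdata = dat, na.action = na.exclude)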

response

a vector of factor (classification) or numeric (regression).

object

an mlSvm object

newdata

a new dataset with the same conformation as the training set (same variables, except possibly the class for classification or the dependent variable for regression). Usually a test set, or a new dataset to be predicted.

method

"direct" (default) or "cv". "direct" predicts new cases in ⁠newdata=⁠ if this argument is provided, or the cases in the training set if not. Take care that not providing ⁠newdata=⁠ means that you just calculate the self-consistency of the classifier but cannot use the metrics derived from these results for the assessment of its performances. Either use a different data set in ⁠newdata=⁠ or use the alternate cross-validation ("cv") technique. If you specify method = "cv" then cvpredict() is used and you cannot provide ⁠newdata=⁠ in that case.

Value

ml_svm()/mlSvm() creates an mlSvm, mlearning object containing the classifier and a lot of additional metadata used by the functions and methods you can apply to it, like predict() or cvpredict(). In case you want to program new functions or extract specific components, inspect the "unclassed" object using unclass().
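
For instance (a sketch, using the iris_svm object fitted in the Examples below):

names(unclass(iris_svm))
str(unclass(iris_svm), max.level = 1)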

See Also

mlearning(), cvpredict(), confusion(), also e1071::svm() that actually does the calculation.

Examples

# Prepare data: split into training set (2/3) and test set (1/3)
data("iris", package = "datasets")
train <- c(1:34, 51:83, 101:133)
iris_train <- iris[train, ]
iris_test <- iris[-train, ]
# One case with missing data in train set, and another case in test set
iris_train[1, 1] <- NA
iris_test[25, 2] <- NA

iris_svm <- ml_svm(data = iris_train, Species ~ .)
summary(iris_svm)
predict(iris_svm) # Default type is class
predict(iris_svm, type = "membership")
predict(iris_svm, type = "both")
# Self-consistency, do not use for assessing classifier performances!
confusion(iris_svm)
# Use an independent test set instead
confusion(predict(iris_svm, newdata = iris_test), iris_test$Species)

# Another dataset
data("HouseVotes84", package = "mlbench")
house_svm <- ml_svm(data = HouseVotes84, Class ~ ., na.action = na.omit)
summary(house_svm)
# Cross-validated confusion matrix
confusion(cvpredict(house_svm), na.omit(HouseVotes84)$Class)

# Regression using support vector machine
data(airquality, package = "datasets")
ozone_svm <- ml_svm(data = airquality, Ozone ~ ., na.action = na.omit)
summary(ozone_svm)
plot(na.omit(airquality)$Ozone, predict(ozone_svm))
abline(a = 0, b = 1)
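
# A rough additional check (not part of the original example): root mean
# squared error of the predictions against the observed ozone values
sqrt(mean((na.omit(airquality)$Ozone - predict(ozone_svm))^2))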
