irsvm_fit: Fit iteratively reweighted support vector machines for robust loss functions

View source: R/irsvm.R

irsvm_fit    R Documentation

Fit iteratively reweighted support vector machines for robust loss functions

Description

irsvm_fit is used to train a subject-weighted support vector machine in which the weights are computed iteratively from a robust loss function via iteratively reweighted convex optimization (IRCO). It can be used to carry out robust regression and binary classification. It does the computing for the wrapper function irsvm.

Usage

irsvm_fit(x, y, weights, cfun="ccave", s=NULL, delta=0.0001, type = NULL, 
          kernel="radial", cost=1, epsilon = 0.1, iter=10, reltol=1e-5, 
          trace=FALSE, ...)

Arguments

x

a data matrix, a vector, or a sparse 'design matrix' (object of class Matrix provided by the Matrix package, or of class matrix.csr provided by the SparseM package, or of class simple_triplet_matrix provided by the slam package).

y

a response vector with one label for each row/component of x. Can be either a factor (for classification tasks) or a numeric vector (for regression).

weights

the weight of each subject. It should have the same length as y.

cfun

character, type of convex cap (concave) function.
Valid options are:

  • "hcave"

  • "acave"

  • "bcave"

  • "ccave"

  • "dcave"

  • "ecave"

  • "gcave"

  • "tcave"

s

tuning parameter of cfun. Must satisfy s > 0, except that s = 0 is allowed for cfun="tcave". If s is too close to 0 for cfun="acave", "bcave", or "ccave", the computed weights can become 0 for all observations, which crashes the program.

delta

a small positive number provided by the user, required only if cfun="gcave" and 0 < s < 1

type

irsvm_fit can be used as a classification machine or as a regression machine. Depending on whether y is a factor or not, the default setting for type is C-classification or eps-regression, respectively, but it may be overwritten by setting an explicit value.
Valid options are:

  • C-classification

  • nu-classification

  • eps-regression

  • nu-regression

kernel

the kernel used in training and predicting. You might consider changing some of the following parameters, depending on the kernel type.

linear:

u'*v

polynomial:

(gamma*u'*v + coef0)^degree

radial basis:

exp(-gamma*|u-v|^2)

sigmoid:

tanh(gamma*u'*v + coef0)
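As a quick check of the formulas above, the linear and radial basis kernels can be evaluated directly in R (the values of u, v and gamma below are arbitrary illustrations; by default wsvm chooses gamma as 1/(data dimension)):

```r
u <- c(1, 2); v <- c(2, 0)
gamma <- 0.5
sum(u * v)                     # linear: u'*v
exp(-gamma * sum((u - v)^2))   # radial basis: exp(-gamma*|u-v|^2)
```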

cost

cost of constraints violation (default: 1); this is the 'C'-constant of the regularization term in the Lagrange formulation, and is proportional to the inverse of lambda in irglmreg.

epsilon

epsilon in the insensitive-loss function (default: 0.1)

iter

number of iterations in the IRCO algorithm

reltol

convergence criterion of the IRCO algorithm

trace

If TRUE, fitting progress is reported

...

additional parameters for function wsvm in package WeightSVM

Details

A case-weighted SVM is fit by the IRCO algorithm, where the loss function is a composite function of cfun and the loss implied by type, plus an L_2 penalty. Additional arguments include degree, gamma, coef0, class.weights, cachesize, tolerance, shrinking, probability and fitted, the same as for wsvm in package WeightSVM.
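A conceptual sketch of the IRCO loop for regression follows (illustrative only: in the package, the weight update is derived from the chosen cfun and s, and the exponential down-weighting below is merely a stand-in for that concave-function-based formula):

```r
library(WeightSVM)

irco_sketch <- function(x, y, s = 1, iter = 10) {
  w <- rep(1, length(y))               # start from unit weights
  for (i in seq_len(iter)) {
    fit <- wsvm(x, y, weight = w, kernel = "radial")
    r <- y - predict(fit, x)           # residuals of the current weighted fit
    w <- exp(-r^2 / s)                 # stand-in for the cfun-based weights
  }
  fit
}
```

Each pass refits the weighted SVM with weights that shrink toward 0 for observations with large loss, which is how IRCO achieves robustness to outliers.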

Value

An object of class "wsvm" (see package WeightSVM) containing the fitted model, including:

SV

The resulting support vectors (possibly scaled).

index

The index of the resulting support vectors in the data matrix. Note that this index refers to the preprocessed data (after the possible effect of na.omit and subset).

coefs

The corresponding coefficients times the training labels.

rho

The negative intercept.

sigma

In case of a probabilistic regression model, the scale parameter of the hypothesized (zero-mean) laplace distribution estimated by maximum likelihood.

probA, probB

numeric vectors of length k(k-1)/2, where k is the number of classes, containing the parameters of the logistic distributions fitted to the decision values of the binary classifiers (1 / (1 + exp(a x + b))).
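For illustration, with hypothetical fitted values a (from probA) and b (from probB), the class probability for a decision value d follows the formula above:

```r
a <- -1.2; b <- 0.3    # hypothetical probA, probB entries
d <- 0.8               # decision value for one observation
1 / (1 + exp(a * d + b))
```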

Author(s)

Zhu Wang zwang145@uthsc.edu

References

Zhu Wang (2020) Unified Robust Estimation, arXiv e-prints, https://arxiv.org/abs/2010.02848

See Also

irsvm, print, predict, coef and plot.

Examples

data(iris)
iris <- subset(iris, Species %in% c("setosa", "versicolor"))

# default with factor response:
model <- irsvm(Species ~ ., data = iris, kernel = "linear", trace = TRUE)
model <- irsvm(Species ~ ., data = iris)

# alternatively the traditional interface:
x <- subset(iris, select = -Species)
y <- iris$Species
model <- irsvm(x, y)

# test with train data
pred <- predict(model, x)
# (same as:)
pred <- fitted(model)

# check accuracy:
table(pred, y)

# compute decision values and probabilities:
pred <- predict(model, x, decision.values = TRUE)
attr(pred, "decision.values")

# visualize (classes by color, SV by crosses):
plot(cmdscale(dist(iris[, -5])),
     col = as.integer(iris[, 5]),
     pch = c("o", "+")[1:100 %in% model$index + 1])

## try regression mode on two dimensions

# create data
x <- seq(0.1, 5, by = 0.05)
y <- log(x) + rnorm(x, sd = 0.2)

# estimate model and predict input values
m   <- irsvm(x, y)
new <- predict(m, x)

# visualize
plot(x, y)
points(x, log(x), col = 2)
points(x, new, col = 4)
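The examples above use the wrapper irsvm; a direct call to irsvm_fit can look like the following sketch (assuming the regression data x and y created above, and starting from unit weights):

```r
# direct call to the workhorse function documented here:
fit <- irsvm_fit(x = as.matrix(x), y = y, weights = rep(1, length(y)),
                 cfun = "ccave", s = 1, kernel = "radial")
```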

mpath documentation built on Jan. 7, 2023, 1:17 a.m.