# svmplus: Implementation of Support Vector Machines Plus (SVM+)

## Description

Implementation of SVM+ for classification problems.

## Details

The classical machine learning paradigm assumes training examples in the form of i.i.d. pairs: $(x_1, y_1), \ldots, (x_l, y_l), \hspace{1em} x_i \in X, \hspace{1em} y_i \in \{-1, +1\}.$

Training examples are represented by features $x_i$, and the same feature space is required for predicting future observations. However, this approach does not make use of other useful data that is available only at training time; such data is referred to as Privileged Information (PI).

Learning Under Privileged Information (LUPI) is a novel machine learning paradigm. It offers faster convergence of the learning process by exploiting the privileged information: "fewer training examples are needed to achieve similar predictive performance", or equivalently, "the same number of examples can provide better predictive performance". In the LUPI paradigm, training examples come in the form of i.i.d. triplets

$(x_1, x_1^*, y_1), \ldots, (x_l, x_l^*, y_l), \hspace{1em} x_i \in X, \hspace{1em} x^*_i \in X^*, \hspace{1em} y_i \in \{-1, +1\}$

where $x_i^*$ denotes the PI. SVM+ is one realization of the LUPI paradigm. In SVM+, the privileged information is used to estimate a linear model of the slack variables, namely

$\xi_i = (w^*)^T z_i^* + b^*,$

where $z_i = \phi(x_i)$ and $z_i^* = \phi^*(x_i^*)$ denote the kernel mappings in the decision and correcting spaces, respectively.

The SVM+ objective function is defined as: $\min_{w,b,w^*,b^*} \left\lbrace \frac{1}{2} w^T w + \frac{\gamma}{2} (w^*)^T w^* + C \sum_{i=1}^l \left[ (w^*)^T z_i^* + b^* \right] \right\rbrace$

$\mathrm{s.t.} \quad y_i (w^T z_i + b) \geq 1 - \left[ (w^*)^T z_i^* + b^* \right],$

$(w^*)^T z_i^* + b^* \geq 0, \quad \forall i.$
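The dual stated next follows from this primal by the standard Lagrangian argument; a sketch of the stationarity conditions (with multipliers $\alpha_i, \beta_i \geq 0$ for the two constraint families):

```latex
% Lagrangian of the SVM+ primal:
% L = 1/2 w^T w + \gamma/2 (w^*)^T w^* + C \sum_i [(w^*)^T z_i^* + b^*]
%     - \sum_i \alpha_i [ y_i (w^T z_i + b) - 1 + (w^*)^T z_i^* + b^* ]
%     - \sum_i \beta_i [ (w^*)^T z_i^* + b^* ]
% Setting the partial derivatives to zero gives:
\frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_{i=1}^l \alpha_i y_i z_i,
\qquad
\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i=1}^l \alpha_i y_i = 0,
\frac{\partial L}{\partial w^*} = 0 \;\Rightarrow\;
  w^* = \frac{1}{\gamma} \sum_{i=1}^l (\alpha_i + \beta_i - C)\, z_i^*,
\qquad
\frac{\partial L}{\partial b^*} = 0 \;\Rightarrow\; \sum_{i=1}^l (\alpha_i + \beta_i - C) = 0.
```

Substituting $w$ and $w^*$ back into the Lagrangian, with $K(x_i, x_j) = z_i^T z_j$ and $K^*(x_i^*, x_j^*) = (z_i^*)^T z_j^*$, yields the dual problem below.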

The dual SVM+ problem is defined as follows:

$\max_{\alpha,\beta} \left\lbrace \sum_{i=1}^l \alpha_i - \frac{1}{2} \sum_{i,j=1}^l \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \frac{1}{2\gamma} \sum_{i,j=1}^l (\alpha_i+\beta_i - C) (\alpha_j+\beta_j - C) K^*(x_i^*, x_j^*) \right\rbrace$

$\mathrm{s.t.} \quad \sum_{i=1}^l \alpha_i y_i = 0, \quad \sum_{i=1}^l (\alpha_i+\beta_i - C) = 0,$

$\alpha_i \geq 0, \quad \beta_i \geq 0, \quad \forall i.$
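As a concrete illustration, the dual above can be solved as a small generic QP. The following Python sketch is not the package's implementation (the package is written in R); the function names `rbf_kernel` and `solve_svmplus_dual` are illustrative only, and a general-purpose solver (`scipy.optimize.minimize` with SLSQP) stands in for a dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def solve_svmplus_dual(X, Xstar, y, C=1.0, gamma=1.0):
    # Solve the dual SVM+ problem as a generic QP over u = [alpha; beta].
    l = len(y)
    K = rbf_kernel(X, X)           # kernel K in the decision space
    Ks = rbf_kernel(Xstar, Xstar)  # kernel K* in the correcting (PI) space

    def neg_dual(u):
        # Negated dual objective (we minimize instead of maximize).
        a, b = u[:l], u[l:]
        s = a + b - C
        return (-a.sum()
                + 0.5 * (a * y) @ K @ (a * y)
                + (0.5 / gamma) * (s @ Ks @ s))

    constraints = [
        {"type": "eq", "fun": lambda u: u[:l] @ y},        # sum_i alpha_i y_i = 0
        {"type": "eq", "fun": lambda u: u.sum() - l * C},  # sum_i (alpha_i + beta_i - C) = 0
    ]
    res = minimize(neg_dual, np.full(2 * l, C / 2.0), method="SLSQP",
                   bounds=[(0.0, None)] * (2 * l), constraints=constraints)
    return res.x[:l]  # the alpha_i used in the decision function
```

The resulting $\alpha$ defines the usual SVM decision function $f(x) = \mathrm{sign}\bigl(\sum_i \alpha_i y_i K(x_i, x) + b\bigr)$; note that the privileged kernel $K^*$ is needed only at training time, so prediction requires only the standard features.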

This package offers a Quadratic Programming (QP) based convex optimization solution for the dual SVM+ problem. In the future, faster implementations based on LIBSVM and LIBLINEAR are planned. We refer to [1] for the theoretical details of LUPI and SVM+, and to [2] for the implementation details of SVM+ in MATLAB.

## References

[1] Vapnik, V., et al., Neural Networks, 2009, 22, pp 544–557. https://doi.org/10.1016/j.neunet.2009.06.042

[2] Li et al., 2016. https://github.com/okbalefthanded/svmplus_matlab

[3] Bendtsen, C., et al., Ann Math Artif Intell, 2017, 81, pp 155–166. https://doi.org/10.1007/s10472-017-9541-2

svmplus documentation built on April 25, 2018, 5:05 p.m.