gelnet.ker: Kernel models for linear regression, binary classification, and one-class problems


View source: R/gelnet.R

Description

Infers the problem type and learns the appropriate kernel model.

Usage

gelnet.ker(K, y, lambda, a, max.iter = 100, eps = 1e-05, v.init = rep(0,
  nrow(K)), b.init = 0, fix.bias = FALSE, silent = FALSE,
  balanced = FALSE)

Arguments

K

n-by-n matrix of pairwise kernel values over a set of n samples

y

n-by-1 vector of response values. Must be a numeric vector for regression, a factor with two levels for binary classification, or NULL for a one-class task.

lambda

scalar, regularization parameter

a

n-by-1 vector of sample weights (regression only)

max.iter

maximum number of iterations (binary classification and one-class problems only)

eps

convergence precision (binary classification and one-class problems only)

v.init

initial parameter estimate for the kernel weights (binary classification and one-class problems only)

b.init

initial parameter estimate for the bias term (binary classification only)

fix.bias

set to TRUE to prevent the bias term from being updated (regression only) (default: FALSE)

silent

set to TRUE to suppress run-time output to stdout (default: FALSE)

balanced

boolean specifying whether the balanced model is being trained (binary classification only) (default: FALSE)

Details

The entries of the kernel matrix K can be interpreted as dot products K(x_i, x_j) = φ(x_i)^T φ(x_j) in a feature space induced by some mapping φ. The corresponding weight vector can be expressed as w = ∑_i v_i φ(x_i). New samples can nevertheless be scored without explicit access to the underlying feature space:

w^T φ(x) + b = ∑_i v_i φ(x_i)^T φ(x) + b = ∑_i v_i K( x_i, x ) + b
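As a concrete check of this identity, the sketch below uses a linear kernel (φ is the identity map); the sample matrix X, weights v, and bias b are arbitrary illustrative values, not a fitted gelnet model:

```r
# Linear kernel: phi(x) = x, so K(x_i, x_j) = x_i^T x_j.
set.seed(1)
X <- matrix(rnorm(20), nrow = 5)  # 5 training samples, 4 features
v <- rnorm(5)                     # kernel weights, one per training sample
b <- 0.5                          # bias term
x.new <- rnorm(4)                 # a new, unseen sample

# Explicit feature-space route: w = sum_i v_i * phi(x_i)
w <- drop(t(X) %*% v)
s.explicit <- sum(w * x.new) + b

# Kernel route: sum_i v_i * K(x_i, x.new) + b, no access to phi needed
s.kernel <- sum(v * drop(X %*% x.new)) + b

all.equal(s.explicit, s.kernel)  # TRUE
```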

The method determines the problem type from the labels argument y. If y is a numeric vector, then a ridge regression model is trained by optimizing the following objective function:

\frac{1}{2n} ∑_i a_i (y_i - (w^T x_i + b))^2 + \frac{λ}{2} w^T w
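Under simplifying assumptions (uniform sample weights a_i = 1 and no bias term), substituting w = ∑_i v_i φ(x_i) into this ridge objective and setting the gradient with respect to v to zero yields the closed-form solution v = (K + nλI)^{-1} y. A minimal self-contained sketch, independent of the package's own solver:

```r
set.seed(2)
n <- 6
X <- matrix(rnorm(n * 3), nrow = n)
K <- X %*% t(X)                      # linear kernel matrix
y <- rnorm(n)                        # numeric responses -> regression task
lambda <- 0.1

# Closed-form kernel ridge solution (assumes a_i = 1 and b = 0)
v <- solve(K + n * lambda * diag(n), y)

# Sanity check: the objective's gradient w.r.t. v vanishes at the solution
grad <- (1 / n) * K %*% (K %*% v - y) + lambda * K %*% v
max(abs(grad)) < 1e-8  # TRUE
```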

If y is a factor with two levels, then the function returns a binary classification model, obtained by optimizing the following objective function:

-\frac{1}{n} ∑_i ( y_i s_i - \log( 1 + \exp(s_i) ) ) + \frac{λ}{2} w^T w

where

s_i = w^T x_i + b
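Because w = ∑_i v_i φ(x_i), the penalty w^T w equals v^T K v, so this objective can be evaluated from the kernel matrix alone. A hedged sketch (obj.binary is an illustrative helper, not part of the package; it assumes y coded as 0/1 and a λ/2 scaling on the penalty):

```r
obj.binary <- function(K, y, v, b, lambda) {
  n <- length(y)
  s <- drop(K %*% v) + b                    # s_i = w^T x_i + b in kernel form
  -(1 / n) * sum(y * s - log(1 + exp(s))) +
    (lambda / 2) * drop(t(v) %*% K %*% v)   # w^T w = v^T K v
}

set.seed(3)
X <- matrix(rnorm(12), nrow = 4)
K <- X %*% t(X)
y <- c(0, 1, 1, 0)
obj.binary(K, y, v = rep(0, 4), b = 0, lambda = 0.1)  # log(2) ~ 0.693 when v = 0, b = 0
```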

Finally, if no labels are provided (y is NULL), then a one-class model is constructed using the following objective function:

-\frac{1}{n} ∑_i ( s_i - \log( 1 + \exp(s_i) ) ) + \frac{λ}{2} w^T w

where

s_i = w^T x_i
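The one-class objective has no closed form, but it is smooth in v and can be minimized iteratively. The sketch below uses plain gradient descent purely for illustration (gelnet.ker itself uses its own iterative solver, controlled by max.iter and eps); the λ/2 penalty scaling is an assumption:

```r
sigmoid <- function(s) 1 / (1 + exp(-s))

oneclass.obj <- function(K, v, lambda) {
  n <- nrow(K)
  s <- drop(K %*% v)                        # s_i = w^T x_i in kernel form
  -(1 / n) * sum(s - log(1 + exp(s))) +
    (lambda / 2) * drop(t(v) %*% K %*% v)   # w^T w = v^T K v
}

set.seed(5)
n <- 6
X <- matrix(rnorm(n * 3), nrow = n)
K <- X %*% t(X)
lambda <- 0.5
v <- rep(0, n)
eta <- 0.01                                 # step size, chosen for illustration

for (iter in 1:200) {
  s <- drop(K %*% v)
  grad <- drop(-(1 / n) * K %*% (1 - sigmoid(s)) + lambda * K %*% v)
  v <- v - eta * grad
}

oneclass.obj(K, v, lambda) < oneclass.obj(K, rep(0, n), lambda)  # TRUE: objective improved
```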

In all cases, w = ∑_i v_i φ(x_i) and the method solves for v_i.

Value

A list with two elements:

v

n-by-1 vector of kernel weights

b

scalar, bias term for the linear model (omitted for one-class models)

See Also

gelnet


gelnet documentation built on May 2, 2019, 2:10 p.m.