IntervalRegressionInternal


Description

Solve the squared hinge loss interval regression problem for one regularization parameter: w* = argmin_w L(w) + regularization * ||w||_1 where L(w) is the average squared hinge loss with respect to the targets, and ||w||_1 is the L1-norm of the weight vector (excluding the first element, which is the un-regularized intercept or bias term). This function performs no scaling of input features, and is meant for internal use only! To learn a regression model, try IntervalRegressionCV or IntervalRegressionUnregularized.

Usage

IntervalRegressionInternal(features, 
    targets, initial.param.vec, 
    regularization, threshold = 0.001, 
    max.iterations = 1000, 
    weight.vec = NULL, 
    Lipschitz = NULL, 
    verbose = 2, margin = 1, 
    biggest.crit = 100)
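A minimal sketch of a direct call on toy data (not from the package documentation; the toy feature matrix, targets, and hyper-parameter values here are illustrative assumptions). As the Description notes, this function does no feature scaling, so real applications should use IntervalRegressionCV instead.

```r
## Hedged example: tiny unscaled toy problem, for illustration only.
library(penaltyLearning)
set.seed(1)
n <- 20
## First column must be all ones (un-regularized intercept).
features <- cbind(intercept = 1, x = rnorm(n))
## Target matrix of (lower, upper) interval limits, one row per problem.
center <- rnorm(n)
targets <- cbind(center - 1, center + 1)
fit <- IntervalRegressionInternal(
  features, targets,
  initial.param.vec = rep(0, ncol(features)),
  regularization = 0.1)
```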

Arguments

features

Scaled numeric feature matrix (problems x features). The first column/feature should be all ones and will not be regularized.

targets

Numeric target matrix (problems x 2).

initial.param.vec

Initial guess for the weight vector (length = number of features).

regularization

Degree of L1-regularization.

threshold

When the stopping criterion gets below this threshold, the algorithm stops and declares the solution as optimal.

max.iterations

Maximum number of iterations. If the algorithm has not found an optimal solution after this many iterations, it stops; consider increasing the Lipschitz constant and max.iterations.

weight.vec

A numeric vector of weights for each training example.

Lipschitz

A numeric scalar or NULL, which means to compute Lipschitz as the mean of the squared L2-norms of the rows of the feature matrix.

verbose

Print progress messages (via cat): restarts and a final summary if >= 1, and every iteration if >= 2.

margin

Margin size hyper-parameter, default 1.

biggest.crit

Restart FISTA with a bigger Lipschitz (smaller step size) if crit gets larger than this.
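The documented default for Lipschitz (when NULL) can be sketched in one line; this is a reading of the argument description above, not a copy of the package's internal code.

```r
## Mean of the squared L2-norms of the rows of the feature matrix,
## as described for Lipschitz = NULL (assumed to match the internals).
Lipschitz <- mean(rowSums(features^2))
```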

Value

Numeric vector of scaled weights w of the affine function f_w(X) = X %*% w for a scaled feature matrix X with the first column entirely ones.
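Since the return value is the weight vector itself, predictions follow directly from the affine form above; `fit` here is a hypothetical result of a prior call.

```r
## Sketch: predicted log(penalty) values for each problem, assuming
## `fit` holds the returned weight vector and `features` has a
## leading column of ones.
pred <- features %*% fit
```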

Author(s)

Toby Dylan Hocking


penaltyLearning documentation built on July 1, 2020, 10:26 p.m.