subgradient: Minimum-norm subgradient of the penalized squared error


Description

This function computes the minimum-norm subgradient of the approximated squared error with an L1-norm or L2-norm penalty.
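The L1 penalty is non-differentiable wherever a parameter is exactly zero, so a minimum-norm element of the subdifferential is used in place of a gradient. A minimal sketch of that idea for a plain L1-penalized least-squares objective (illustrative only, not snnR's internal code; `min_norm_subgradient` is a hypothetical helper):

```r
# Minimum-norm subgradient of f(w) = 0.5 * ||y - X w||^2 + lambda * sum(|w_i|).
# Where w_i != 0 the penalty is differentiable; at w_i == 0 the element of the
# subdifferential with smallest magnitude is the soft-thresholded gradient.
min_norm_subgradient <- function(w, X, y, lambda) {
  g <- crossprod(X, X %*% w - y)   # gradient of the smooth squared-error part
  sg <- g + lambda * sign(w)       # valid on the coordinates where w_i != 0
  zero <- (w == 0)
  sg[zero] <- sign(g[zero]) * pmax(abs(g[zero]) - lambda, 0)
  as.numeric(sg)
}
```

For example, at w = (0, 0) with X the 2 x 2 identity, y = (3, 0.5), and lambda = 1, the smooth gradient is (-3, -0.5); soft-thresholding gives the minimum-norm subgradient (-2, 0), so the second coordinate is already "stationary" under the penalty.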

Usage

subgradient(w, X, y, nHidden, lambda, lambda2)

Arguments

w

(numeric, n) weights and biases.

X

(numeric, n x p) incidence matrix.

y

(numeric, n) the response vector.

nHidden

(positive integer, 1 x h) matrix; h is the number of hidden layers, and nHidden[1,h] is the number of neurons in the h-th hidden layer.

lambda

(numeric, n) Lagrange multiplier for the L1-norm penalty on the parameters.

lambda2

(numeric, n) Lagrange multiplier for the L2-norm penalty on the parameters.

Details

The method chooses the subgradient of minimum norm as a steepest-descent direction and takes a step in that direction resembling a Newton iteration, with a Hessian approximation in place of the exact Hessian.
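The update described above can be sketched as follows (an illustrative fragment, not snnR's implementation; `H_inv` stands for whatever inverse-Hessian approximation the solver maintains, e.g. from quasi-Newton updates):

```r
# One Newton-like step along the minimum-norm subgradient direction.
# w     : current weights and biases
# g_min : minimum-norm subgradient at w
# H_inv : approximation to the inverse Hessian
# t     : step size, typically chosen by a line search
newton_like_step <- function(w, g_min, H_inv, t = 1) {
  as.numeric(w - t * (H_inv %*% g_min))
}
```

For a quadratic objective f(w) = 0.5 * ||w||^2, the exact inverse Hessian is the identity and the subgradient is w itself, so a unit step from any w lands directly at the minimizer w = 0.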

Value

A vector with the subgradient values.


snnR documentation built on May 2, 2019, 8:54 a.m.
