Description
This function obtains the minimum-norm subgradient of the approximated squared error with an L1 norm or L2 norm penalty.
Usage

    subgradient(w, X, y, nHidden, lambda, lambda2)
Arguments

w        (numeric, n) vector of weights and biases.

X        (numeric, n x p) incidence matrix.

y        (numeric, n) response vector.

nHidden  (positive integer, 1 x h) matrix; h is the number of hidden layers and nHidden[1,h] is the number of neurons in the h-th hidden layer.

lambda   (numeric, n) Lagrange multiplier for the L1 norm penalty on the parameters.

lambda2  (numeric, n) Lagrange multiplier for the L2 norm penalty on the parameters.
Details

The method is based on choosing the subgradient with minimum norm as a steepest descent direction, then taking a step in that direction that resembles a Newton iteration, using an approximation of the Hessian.
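The minimum-norm choice can be sketched coordinate-wise: where a weight is nonzero the L1 term is differentiable, and where it is zero the minimum-norm element of the subdifferential is a soft-thresholded loss gradient. The sketch below (in Python with NumPy, purely illustrative; the function name min_norm_subgradient and the argument grad_loss are assumptions, not the package's implementation) shows this construction for a penalized objective loss(w) + lambda * ||w||_1 + (lambda2 / 2) * ||w||_2^2:

```python
import numpy as np

def min_norm_subgradient(w, grad_loss, lam, lam2):
    """Minimum-norm element of the subdifferential of
    loss(w) + lam * ||w||_1 + (lam2 / 2) * ||w||_2^2,
    where grad_loss is the gradient of the smooth loss at w."""
    # Smooth parts: loss gradient plus the L2 penalty term.
    g = grad_loss + lam2 * w
    return np.where(
        w != 0,
        # |w_i| > 0: the subgradient is unique, g_i + lam * sign(w_i).
        g + lam * np.sign(w),
        # w_i = 0: subdifferential is g_i + lam * [-1, 1]; the
        # minimum-norm element is the soft-thresholded gradient.
        np.sign(g) * np.maximum(np.abs(g) - lam, 0.0),
    )
```

For example, with w = (1, 0, 0), loss gradient (2, 0.5, 3), lam = 1 and lam2 = 0, the result is (3, 0, 2): the second coordinate vanishes because its gradient magnitude is below the threshold lam, which is what makes this direction suitable for sparsity-preserving descent.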
Value

A vector with the subgradient values.