SteepD: Steepest Descent

View source: R/SteepestDescend.R

SteepD (R Documentation)

Steepest Descent

Description


Implements the steepest descent method to find the coefficients beta that minimize the loss function

L(beta) = ||X beta - Y||^2

In this implementation, the step size is updated at each iteration using the gradient

grad L(beta) = 2 X^T X beta - 2 X^T Y

and the Hessian matrix

H(beta) = 2 X^T X
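A minimal, self-contained R sketch of the method (not the package's code; the exact line-search step size t = g'g / (g' H g), with g the gradient and H the Hessian above, is an assumption about how the step is chosen):

```r
# Steepest descent for L(beta) = ||X beta - Y||^2.
# Step size by exact line search along the negative gradient:
#   t = (g' g) / (g' H g),  g = 2 X'X beta - 2 X'Y,  H = 2 X'X
steepest_descent <- function(X, Y, init = NULL, tol = 1e-4, maxit = 1000L) {
  beta <- if (is.null(init)) rep(1, ncol(X)) else init  # default guess: all ones
  XtX <- crossprod(X)      # X'X
  XtY <- crossprod(X, Y)   # X'Y
  for (it in seq_len(maxit)) {
    g <- 2 * (XtX %*% beta - XtY)              # gradient at current beta
    denom <- drop(2 * t(g) %*% XtX %*% g)      # g' H g with H = 2 X'X
    if (denom == 0) break                      # zero gradient: already optimal
    t_step <- drop(crossprod(g)) / denom       # exact line-search step size
    beta_new <- beta - t_step * g
    if (max(abs(beta_new - beta)) < tol) {     # sup-norm stopping rule
      beta <- beta_new
      break
    }
    beta <- beta_new
  }
  drop(beta)
}
```

On a well-conditioned problem this converges to the ordinary least-squares solution solve(t(X) %*% X, t(X) %*% Y).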

Usage

SteepD(data, init = NULL, tol = 1e-04, maxit = 1000L, verb = FALSE, check_loss = FALSE)

Arguments

data

list containing the data; its elements must be named X and Y, where X is an n x k matrix and Y is a vector of length n. Here, n is the number of observations and k is the number of beta coefficients.

init

vector of initial guesses for the parameters of interest. If NULL, all values are set equal to 1.

tol

numeric; must be strictly positive. Tolerance on the error between subsequent iterations, used to determine the stopping criterion.

maxit

integer; must be strictly positive. The maximum number of iterations.

verb

logical; if TRUE, prints more information about the status of the algorithm (default is FALSE).

check_loss

logical; if TRUE, the algorithm stops when |L(beta^(1)) - L(beta^(0))| < tol; otherwise, it stops when ||beta^(1) - beta^(0)||_inf < tol, where beta^(0) and beta^(1) denote the estimates at two subsequent iterations.
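The two stopping rules can be sketched as follows (a hypothetical helper, not the package's internals):

```r
# Returns TRUE when the chosen stopping criterion is met:
# either the change in the loss, or the sup-norm change in the coefficients.
converged <- function(beta_new, beta_old, X, Y, tol, check_loss) {
  loss <- function(b) sum((X %*% b - Y)^2)   # L(beta) = ||X beta - Y||^2
  if (check_loss) {
    abs(loss(beta_new) - loss(beta_old)) < tol
  } else {
    max(abs(beta_new - beta_old)) < tol      # infinity norm on the coefficients
  }
}
```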

Value

list composed of:

Beta_hat: the estimated beta coefficients

Minimum: the value of the loss function at the convergence point (only if verb = TRUE)

Final_error: the value of the error at the convergence point

Num_iter: the number of iterations needed to reach the minimum

Time: the time elapsed to perform the optimization (increased by 2 seconds to make it traceable even with small data)
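A hypothetical usage sketch, assuming the package is installed; the simulated X, Y, and the guard around the call are illustrative, while the data-list layout and the call signature follow the Usage and Arguments sections above:

```r
set.seed(42)
n <- 100; k <- 3
X <- cbind(1, matrix(rnorm(n * (k - 1)), n, k - 1))   # n x k design matrix
beta_true <- c(2, -1, 0.5)
Y <- drop(X %*% beta_true + rnorm(n, sd = 0.1))
dat <- list(X = X, Y = Y)   # elements must be named X and Y

# Guarded call: only runs when the package is available
if (requireNamespace("DescendMethods", quietly = TRUE)) {
  fit <- DescendMethods::SteepD(dat, tol = 1e-6, maxit = 5000L)
  print(fit$Beta_hat)
}
```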


lucapresicce/DescendMethods documentation built on April 26, 2022, 6 p.m.