grad_descent: Gradient Descent


View source: R/grad_descent.R

Description

This function uses the gradient descent algorithm (in matrix form) to estimate the coefficients of a linear regression model under least-squares error. Gradient descent is an optimization algorithm that minimizes a loss function by repeatedly updating the model parameters in the direction of the negative gradient. It is a very common algorithm in statistics and machine learning.
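
The update that gradient descent performs for a least-squares loss can be sketched as follows. This is a minimal illustration under stated assumptions (a mean-squared-error loss and an intercept column of 1's prepended to X), not the exact implementation in R/grad_descent.R, whose scaling and stopping rule may differ.

# Sketch of the gradient descent update for least squares (assumptions noted above)
grad_descent_sketch <- function(X, y, b_0, learn_rate, max_iter) {
  X <- cbind(1, as.matrix(X))     # prepend the intercept column of 1's (assumption)
  y <- as.numeric(y)
  b <- b_0
  n <- nrow(X)
  for (i in seq_len(max_iter)) {
    grad <- -(2 / n) * t(X) %*% (y - X %*% b)   # gradient of the mean squared error
    b <- b - learn_rate * grad                  # step against the gradient
  }
  drop(b)                                       # return the estimated coefficients
}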

Usage

grad_descent(X, y, b_0, learn_rate, max_iter)

Arguments

X

matrix of predictor variables (excluding the intercept column of 1's)

y

column vector of target (response) values

b_0

the model's parameters: a column vector of initial (starting) coefficient values

learn_rate

the learning rate (also known as the step size)

max_iter

the maximum number of iterations for this algorithm

Value

the estimated coefficients (model parameters)

Examples

# Estimate coefficients for predicting Sepal.Length from the other three
# numeric iris variables
data(iris)
print(b <- grad_descent(X = iris[, 2:4], y = iris[, 1], b_0 = rep(1e-16, 4),
                        learn_rate = 2, max_iter = 2e5))
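
If the algorithm has converged, the estimates should be close to the closed-form least-squares solution, which can be checked with lm():

# Closed-form least-squares fit on the same data, for comparison
coef(lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris))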
