Gradient descent

Description

Use gradient descent to find local minima

Usage

graddsc(fp, x, h = 0.001, tol = 1e-04, m = 1000)

gradasc(fp, x, h = 0.001, tol = 1e-04, m = 1000)

gd(fp, x, h = 100, tol = 1e-04, m = 1000)

Arguments

fp

function representing the derivative of f

x

an initial estimate of the location of the minimum (or, for gradasc, the maximum)

h

the step size

tol

the error tolerance

m

the maximum number of iterations

Details

Gradient descent finds a local minimum of a function f by repeatedly stepping against its derivative fp: starting from the initial guess x, each iteration moves by the step size h times the gradient, and iteration stops when the change falls below the error tolerance tol or after m iterations. gradasc steps in the direction of the gradient instead, and so finds a local maximum.
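The iteration described above can be sketched in plain R (an illustration of the method only, not the package's actual implementation; the function name graddsc_sketch is made up here):

```r
# Minimal gradient-descent sketch: step against the derivative fp,
# stopping when the update is smaller than tol or after m iterations.
graddsc_sketch <- function(fp, x, h = 0.001, tol = 1e-4, m = 1000) {
  for (i in seq_len(m)) {
    step <- h * fp(x)               # scaled gradient at the current point
    x <- x - step                   # move downhill
    if (sqrt(sum(step^2)) < tol)    # stop once the step is tiny
      break
  }
  x
}

# fp is the derivative of f(x) = x^2, whose minimum is at x = 0
graddsc_sketch(function(x) 2 * x, x = 3, h = 0.1)
```

Gradient ascent is the same loop with `x <- x + step`.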

Value

the x value of the minimum found

See Also

Other optimz: bisection, goldsect, hillclimbing, newton, sa, secant

Examples

fp <- function(x) { x^3 + 3 * x^2 - 1 }   # derivative of the function to minimize
graddsc(fp, 0)

f <- function(x) { (x[1] - 1)^2 + (x[2] - 1)^2 }
fp <- function(x) {
    # gradient of f
    x1 <- 2 * x[1] - 2
    x2 <- 2 * x[2] - 2

    return(c(x1, x2))
}
gd(fp, c(0, 0), 0.05)
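The two-variable call above can be checked against a plain-R reimplementation of the same loop (a sketch assuming gd performs vanilla gradient descent with a fixed step size h; gd_sketch is a made-up name): the analytic minimum of f is at c(1, 1), and the iterate should land very close to it.

```r
# Gradient of f(x) = (x[1] - 1)^2 + (x[2] - 1)^2
fp <- function(x) c(2 * x[1] - 2, 2 * x[2] - 2)

# Fixed-step gradient descent on a vector-valued gradient
gd_sketch <- function(fp, x, h, tol = 1e-4, m = 1000) {
  for (i in seq_len(m)) {
    step <- h * fp(x)
    x <- x - step
    if (sqrt(sum(step^2)) < tol) break
  }
  x
}

gd_sketch(fp, c(0, 0), 0.05)   # approaches the minimum at c(1, 1)
```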
