steep_descent: Steepest Descent Minimization In pracma: Practical Numerical Math Functions

Description

Function minimization by steepest descent.

Usage

 steep_descent(x0, f, g = NULL, info = FALSE, maxiter = 100,
               tol = .Machine$double.eps^(1/2))

Arguments

 x0      start value.
 f       function to be minimized.
 g       gradient function of f; if NULL, a numerical gradient will be calculated.
 info    logical; shall information be printed on every iteration?
 maxiter maximum number of iterations.
 tol     relative tolerance, to be used as stopping rule.
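When g = NULL a numerical gradient is substituted for the analytic one. A central-difference approximation behaves as sketched below; the function names (num_grad, g_analytic) are illustrative only, and pracma's internal scheme may differ.

```r
## Central-difference numerical gradient -- a sketch of what is used
## when g = NULL (pracma's internal scheme may differ).
num_grad <- function(f, x, h = 1e-6) {
    sapply(seq_along(x), function(i) {
        e <- numeric(length(x)); e[i] <- h
        (f(x + e) - f(x - e)) / (2 * h)
    })
}

## 2-D Rosenbrock function and its analytic gradient, for comparison
rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
g_analytic <- function(x)
    c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
       200 * (x[2] - x[1]^2))

num_grad(rosen, c(-1.2, 1))   # close to g_analytic(c(-1.2, 1))
```

The central difference agrees with the analytic gradient to roughly O(h^2), so supplying g explicitly mainly saves function evaluations rather than improving accuracy.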

Details

Steepest descent is a line search method that at each iteration moves along the direction of the negative gradient, i.e. the locally steepest downhill direction.

Value

List with the following components:

 xmin  minimum solution found.
 fmin  value of f at the minimum.
 niter number of iterations performed.

Note

Used some Matlab code as described in the book “Applied Numerical Analysis Using Matlab” by L. V. Fausett.

References

Nocedal, J., and S. J. Wright (2006). Numerical Optimization. Second Edition, Springer-Verlag, New York, pp. 22 ff.

Examples

 ## Rosenbrock function: The flat valley of the Rosenbrock function makes
 ## it infeasible for a steepest descent approach.
 # rosenbrock <- function(x) {
 #     n  <- length(x)
 #     x1 <- x[2:n]
 #     x2 <- x[1:(n-1)]
 #     sum(100*(x1-x2^2)^2 + (1-x2)^2)
 # }
 # steep_descent(c(1, 1), rosenbrock)
 # Warning message:
 # In steep_descent(c(0, 0), rosenbrock) :
 #   Maximum number of iterations reached -- not converged.

 ## Sphere function
 sph <- function(x) sum(x^2)
 steep_descent(rep(1, 10), sph)
 # $xmin  0 0 0 0 0 0 0 0 0 0
 # $fmin  0
 # $niter 2

Example output

$xmin
 0 0 0 0 0 0 0 0 0 0

$fmin
 0

$niter
 2

pracma documentation built on Dec. 11, 2021, 9:57 a.m.