fletcher_powell: Davidon-Fletcher-Powell Method In pracma: Practical Numerical Math Functions

Description

Davidon-Fletcher-Powell method for function minimization.

The Davidon-Fletcher-Powell (DFP) and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) methods are the first quasi-Newton minimization methods developed. These methods differ only in some details; in general, the BFGS approach is more robust.

Usage

```
fletcher_powell(x0, f, g = NULL, maxiter = 1000,
                tol = .Machine$double.eps^(2/3))
```

Arguments

- `x0`: start value.
- `f`: function to be minimized.
- `g`: gradient function of `f`; if `NULL`, a numerical gradient will be calculated.
- `maxiter`: maximum number of iterations.
- `tol`: relative tolerance, to be used as stopping rule.

Details

The starting point is Newton's method in the multivariate case, where the estimate of the minimum is updated by the following equation

x_{new} = x - H^{-1}(x) grad(f)(x)

where H(x) is the Hessian matrix of f at x.

The basic idea is to generate a sequence of good approximations to the inverse Hessian matrix, in such a way that the approximations are again positive definite.
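The DFP rank-two update that realizes this idea can be sketched in a few lines of base R. This is a minimal illustration, not pracma's internal code; `dfp_step` is a hypothetical helper name.

```r
## DFP rank-two update of the inverse-Hessian approximation H
## (hypothetical helper for illustration; not pracma's internal code).
dfp_step <- function(H, s, y) {
  # s = x_new - x_old (step), y = g(x_new) - g(x_old) (gradient change)
  Hy <- H %*% y
  H + (s %*% t(s)) / c(crossprod(s, y)) -
      (Hy %*% t(Hy)) / c(crossprod(y, Hy))
}

## Illustration on f(x) = sum(x^2), whose gradient is g(x) = 2*x:
H0 <- diag(2)                    # start from the identity matrix
x0 <- c(1, 2); x1 <- c(0.5, 1)   # two successive iterates
s  <- x1 - x0                    # step taken
y  <- 2*x1 - 2*x0                # change in the gradient
H1 <- dfp_step(H0, s, y)
## H1 is symmetric and satisfies the secant condition H1 %*% y == s,
## so it mimics the inverse Hessian along the last step.
```

As long as the curvature condition `crossprod(s, y) > 0` holds (which a suitable line search guarantees), the update preserves positive definiteness, which is why the approximations remain positive definite.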

Value

List with the following components:

- `xmin`: minimum solution found.
- `fmin`: value of `f` at the minimum.
- `niter`: number of iterations performed.

Note

Based on Matlab code as described in the book “Applied Numerical Analysis Using Matlab” by L. V. Fausett.

References

J. F. Bonnans, J. C. Gilbert, C. Lemarechal, and C. A. Sagastizabal. Numerical Optimization: Theoretical and Practical Aspects. Second Edition, Springer-Verlag, Berlin Heidelberg, 2006.

See Also

`steep_descent`
Examples

```
## Rosenbrock function
rosenbrock <- function(x) {
    n  <- length(x)
    x1 <- x[2:n]
    x2 <- x[1:(n-1)]
    sum(100*(x1-x2^2)^2 + (1-x2)^2)
}

fletcher_powell(c(0, 0), rosenbrock)
# $xmin
# [1] 1 1
# $fmin
# [1] 1.774148e-27
# $niter
# [1] 14
```