# snewton: Safeguarded Newton methods for function minimization using R functions

In optimx: Expanded Replacement and Extension of the 'optim' Function

## Safeguarded Newton methods for function minimization using R functions.

### Description

These versions of the safeguarded Newton method solve the Newton equations with the R function `solve()`. In `snewton` a backtracking line search is used, while in `snewtonm` we rely on a Marquardt stabilization.
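As a rough illustration of the difference between the two solvers (a sketch only, not the package's internal code; the function name `newton_step` and the damping parameter `lambda` are invented for this example), a single Newton step solves a linear system with `solve()`, and the Marquardt variant shifts the Hessian by `lambda` times the identity so the system stays solvable even when the Hessian is indefinite:

```r
# Sketch of one Newton step with optional Marquardt damping.
# newton_step() and lambda are illustrative names, not optimx internals.
newton_step <- function(par, gr, hess, lambda = 0) {
  g <- gr(par)                  # gradient at the current point
  H <- hess(par)                # Hessian at the current point
  # snewtonm-style stabilization: H + lambda * I (pure Newton when lambda = 0)
  Haug <- H + lambda * diag(nrow(H))
  d <- solve(Haug, -g)          # Newton (or damped Newton) direction
  par + d
}

# One pure Newton step on a quadratic with minimum at (1, 2):
q  <- function(x) (x[1] - 1)^2 + 2 * (x[2] - 2)^2
qg <- function(x) c(2 * (x[1] - 1), 4 * (x[2] - 2))
qh <- function(x) diag(c(2, 4))
x1 <- newton_step(c(0, 0), qg, qh)   # a quadratic is solved in one step
```

For a quadratic objective the Hessian is constant, so a single undamped step lands exactly on the minimizer; on general nonlinear functions the step must be safeguarded, which is what the line search (`snewton`) or the damping (`snewtonm`) provides.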

### Usage

```
snewton(par, fn, gr, hess, control = list(trace=0, maxit=500), ...)

snewtonm(par, fn, gr, hess, control = list(trace=0, maxit=500), ...)
```

### Arguments

`par`

A numeric vector of starting estimates.

`fn`

A function that returns the value of the objective at the supplied set of parameters `par`, using auxiliary data in `...`. The first argument of `fn` must be `par`.

`gr`

A function that returns the gradient of the objective at the supplied set of parameters `par`, using auxiliary data in `...`. The first argument of `gr` must be `par`. This function returns the gradient as a numeric vector.

`hess`

A function to compute the Hessian matrix at the supplied parameters `par`. The result must be a square, symmetric matrix.

`control`

An optional list of control settings.

`...`

Further arguments to be passed to `fn`.

### Details

The function `fn` must return a numeric value, `gr` must return a numeric vector, and `hess` must return a matrix. The `control` argument is a list. See the source file `snewton.R` for the complete set of options. Some of the values that may be important for users are:

trace

Set 0 (default) for no output, > 0 for diagnostic output (larger values imply more output).

watch

Set TRUE if the routine is to stop for user input (e.g., Enter) after each iteration. Default is FALSE.

maxit

A limit on the number of iterations (default 500 + 2*n where n is the number of parameters). This is the maximum number of gradient evaluations allowed.

maxfeval

A limit on the number of function evaluations allowed (default 3000 + 10*n).

eps

A tolerance used for judging whether the gradient norm is small (default 1e-07). A gradient norm smaller than (1 + abs(fmin))\*eps\*eps is considered small enough that a local optimum has been found, where fmin is the current estimate of the minimal function value.

acctol

The acceptable-point tolerance (default 0.0001) used in the test ( f <= fmin + gradproj \* steplength \* acctol ). This test is used to ensure sufficient progress is made at each iteration.

stepdec

Step reduction factor for the backtracking line search (default 0.2).

defstep

Initial stepsize (default 1).

reltest

Additive shift for equality test (default 100.0)
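The interaction of `defstep`, `stepdec`, and `acctol` can be sketched as a simple backtracking loop (illustrative only; the helper name `backtrack` is invented for this sketch and is not an optimx function):

```r
# Illustrative backtracking line search using the control defaults above.
# backtrack() is a made-up name, not part of optimx.
backtrack <- function(par, d, fn, gr,
                      defstep = 1, stepdec = 0.2, acctol = 1e-4) {
  f0 <- fn(par)
  gradproj <- sum(gr(par) * d)   # directional derivative along the search direction d
  st <- defstep                  # start from the full (default) step
  # Shrink the step by stepdec until the acceptable-point test passes:
  #   f(par + st*d) <= f(par) + gradproj * st * acctol
  while (fn(par + st * d) > f0 + gradproj * st * acctol) {
    st <- st * stepdec
    if (st < 1e-12) break        # give up on a hopeless direction
  }
  par + st * d
}

# On f(x) = x^2 from x = 3 with descent direction d = -3, the full step
# already satisfies the acceptance test, so no backtracking is needed.
xnew <- backtrack(3, -3, function(x) x^2, function(x) 2 * x)
```

Because `acctol` is small, the test only rejects steps that fail to make even a tiny fraction of the progress predicted by the gradient; `stepdec = 0.2` then shrinks the trial step geometrically until an acceptable point is found.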

### Value

A list with components:

xs

The best set of parameters found.

fv

The value of the objective at the best set of parameters found.

grd

The value of the gradient at the best set of parameters found. A vector.

H

The value of the Hessian at the best set of parameters found. A matrix.

niter

The number of Newton iterations used in finding the solution.

message

A message giving some information on the status of the solution.

### References

Nash, J C (1979, 1990) Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation, Bristol: Adam Hilger. Second Edition, Bristol: Institute of Physics Publications.

### See Also

`optim`

### Examples

```
#Rosenbrock banana valley function
f <- function(x){
   return(100*(x[2] - x[1]*x[1])^2 + (1 - x[1])^2)
}
#gradient
gr <- function(x){
   return(c(-400*x[1]*(x[2] - x[1]*x[1]) - 2*(1 - x[1]), 200*(x[2] - x[1]*x[1])))
}
#Hessian
h <- function(x) {
   a11 <- 2 - 400*x[2] + 1200*x[1]*x[1]; a21 <- -400*x[1]
   return(matrix(c(a11, a21, a21, 200), 2, 2))
}

fg <- function(x){ #function and gradient
   val <- f(x)
   attr(val, "gradient") <- gr(x)
   val
}
fgh <- function(x){ #function, gradient and Hessian
   val <- f(x)
   attr(val, "gradient") <- gr(x)
   attr(val, "hessian") <- h(x)
   val
}

x0 <- c(-1.2, 1)

sr <- snewton(x0, fn=f, gr=gr, hess=h, control=list(trace=1))
print(sr)

srm <- snewtonm(x0, fn=f, gr=gr, hess=h, control=list(trace=1))
print(srm)

#Example 2: Wood function
#
wood.f <- function(x){
   res <- 100*(x[1]^2 - x[2])^2 + (1 - x[1])^2 + 90*(x[3]^2 - x[4])^2 + (1 - x[3])^2 +
      10.1*((1 - x[2])^2 + (1 - x[4])^2) + 19.8*(1 - x[2])*(1 - x[4])
   return(res)
}
#gradient:
wood.g <- function(x){
   g1 <- 400*x[1]^3 - 400*x[1]*x[2] + 2*x[1] - 2
   g2 <- -200*x[1]^2 + 220.2*x[2] + 19.8*x[4] - 40
   g3 <- 360*x[3]^3 - 360*x[3]*x[4] + 2*x[3] - 2
   g4 <- -180*x[3]^2 + 200.2*x[4] + 19.8*x[2] - 40
   return(c(g1, g2, g3, g4))
}
#hessian:
wood.h <- function(x){
   h11 <- 1200*x[1]^2 - 400*x[2] + 2;  h12 <- -400*x[1];  h13 <- h14 <- 0
   h22 <- 220.2;  h23 <- 0;  h24 <- 19.8
   h33 <- 1080*x[3]^2 - 360*x[4] + 2;  h34 <- -360*x[3]
   h44 <- 200.2
   H <- matrix(c(h11, h12, h13, h14,  h12, h22, h23, h24,
                 h13, h23, h33, h34,  h14, h24, h34, h44), ncol = 4)
   return(H)
}
#################################################
w0 <- c(-3, -1, -3, -1)

wd <- snewton(w0, fn=wood.f, gr=wood.g, hess=wood.h, control=list(trace=1))
print(wd)

wdm <- snewtonm(w0, fn=wood.f, gr=wood.g, hess=wood.h, control=list(trace=1))
print(wdm)

```

optimx documentation built on May 11, 2022, 1:08 a.m.