SNewton: safeguarded Newton methods for function minimization

Safeguarded Newton algorithms

So-called Newton methods are among the most commonly mentioned approaches for solving nonlinear equations or minimizing functions. However, as discussed in

https://en.wikipedia.org/wiki/Newton%27s_method#History,

the Newton or Newton-Raphson method as we know it today is not what either of its supposed originators actually used.

This vignette discusses the development of simple safeguarded variants of the Newton method for function minimization in R. These are intended as learning tools, though the Marquardt stabilized version appears to be quite efficient. Note that there are some resources in R for solving nonlinear equations by Newton-like methods in the packages nleqslv and pracma. Also the base-R functions nlminb() and nlm() can make use of Hessians if provided, as can tools in the trust package.

The basic approach

If we have a function of one variable $f(x)$, with gradient $g(x)$ and second derivative (Hessian) $H(x)$, the first-order condition for an extremum (minimum or maximum) is

$$g(x) = 0$$

To ensure a minimum, we want

$$ H(x) > 0 $$

The first order condition leads to a root-finding problem.

It turns out that $x$ need not be a scalar. We can consider it to be a vector of parameters to be determined. This renders $g(x)$ a vector also, and $H(x)$ a matrix. The conditions of optimality then require a zero gradient and positive-definite Hessian.

The Newton approach to such equations is to provide a guess $x_t$ at the root and then solve the Newton equations

$$ H(x_t) * s = - g(x_t)$$

for the search vector $s$. We update $x_t$ to $x_t + s$ and repeat until we have a very small gradient $g(x_t)$. If $H(x)$ is positive definite, we have a reasonable approximation to a (local) minimum.
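
A bare, unsafeguarded version of this iteration is easily written in R. The sketch below is purely illustrative (it is not the optimx code); fn, gr and he stand for user-supplied function, gradient and Hessian routines:

```r
# A minimal, unsafeguarded Newton iteration (illustrative sketch only).
newton_raw <- function(x0, fn, gr, he, tol = 1e-8, maxit = 50) {
  x <- x0
  for (it in seq_len(maxit)) {
    g <- gr(x)
    if (max(abs(g)) < tol) break   # gradient essentially zero: stop
    H <- he(x)
    s <- solve(H, -g)              # solve the Newton equations H s = -g
    x <- x + s                     # take the full (unit) Newton step
  }
  list(par = x, value = fn(x), iterations = it)
}
```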

Motivations

A particular attraction of Newton-like methods is their theoretical quadratic convergence; see https://en.wikipedia.org/wiki/Newton%27s_method. In particular, the method converges in one step for a quadratic function $f(x)$, and for "reasonable" functions it converges very rapidly. There are, however, a number of conditions, and practical programs need to include safeguards against mis-steps in the iterations. Such mis-steps occur because finite-precision floating-point arithmetic incurs errors, particularly when numbers of vastly different scale are involved, or because implicit assumptions, such as continuity of the function or its derivatives, do not hold.

One principal issue is that $H(x)$ may not be positive definite, at least in some parts of the domain, and that the curvature may be such that the unit step to $x_t + s$ does not reduce the function $f$. Applying different safeguards against these problems, and using different methods for the (possibly approximate) solution of the Newton equations, gives a number of possible variants of the approach.

Some algorithm possibilities

There are many choices we can make in building a practical code to implement the ideas above. In line with the two main issues expressed above, we will consider

- how the Newton equations are solved, in particular whether the Hessian is used as supplied or is modified (stabilized) so that the resulting search vector is a descent direction; and
- whether the full Newton step is simply accepted, or whether we require that it reduce the function value, for example via a line search.

The second choice above could be made slightly more stringent so that the Armijo condition of sufficient decrease is met. Adding a curvature requirement gives the Wolfe conditions; see https://en.wikipedia.org/wiki/Wolfe_conditions. The Armijo requirement is generally written

$$ f(x_t + step \cdot s) < f(x_t) + c \cdot step \cdot g(x_t)^T s $$

where $c$ is some number less than $1$; typically $c = 10^{-4} = 0.0001$. Note that the product $g(x_t)^T s$ of the gradient with the search vector is negative in any reasonable situation, since we are trying to go "downhill".
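
In R, this test is a one-line logical check. The helper below is an illustrative sketch (armijo_ok() is not a function in optimx); f0 is the function value at $x_t$ and s the search vector:

```r
# Armijo sufficient-decrease test (illustrative sketch, not optimx code).
armijo_ok <- function(fn, x, f0, g, s, step, c1 = 1e-4) {
  fn(x + step * s) < f0 + c1 * step * sum(g * s)   # sum(g * s) is g^T s
}
```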

A safeguarded Newton method

Following the ideas above, the code snewton() solves the Newton equations with the Hessian as provided (if this is possible; otherwise we stop), and applies a backtracking line search in which the step size is reduced until the Armijo condition is met, or else we terminate with the suggestion that the current $x_t$ is our solution. Note that @Hartley1961 suggested evaluating the function at $x_t + s$ and $x_t + 0.5 s$ to provide three values that can be used for a parabolic inverse interpolation. However, a backtracking search with an acceptable-point criterion is generally simpler yet still effective.
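
In condensed form, and using the illustrative armijo_ok() helper sketched above, one such safeguarded Newton step might look roughly as follows (the actual snewton() code includes further checks and bookkeeping; names here are illustrative):

```r
# One safeguarded Newton step with backtracking (illustrative sketch).
f0 <- fn(x); g <- gr(x); H <- he(x)
s <- try(solve(H, -g), silent = TRUE)          # attempt to solve H s = -g
if (inherits(s, "try-error")) stop("Cannot solve the Newton equations")
step <- 1                                      # control$defstep
while (!armijo_ok(fn, x, f0, g, s, step)) {
  step <- 0.2 * step                           # control$stepdec
  if (all(x + step * s == x)) break            # step no longer changes x
}
x <- x + step * s
```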

Newton-Marquardt method

A slightly different approach, used in the code snewtm() (formerly snewtonm()), applies a Marquardt stabilization of the Hessian to create

$$ H_{aug} = H + \lambda \, I_n $$

That is, we add $\lambda$ times the identity matrix to $H$. Then we try the set of parameters found by adding the solution of the Newton equations, with $H_{aug}$ in place of $H$, to the current "best" set of parameters. If this new set of parameters has a higher function value than the "best" so far, we increase $\lambda$ and try again. Note that we do not need to re-evaluate the gradient or Hessian to do this. Moreover, for a sufficiently large value of $\lambda$, either the step is almost directly down the gradient (i.e., steepest descent) or we have converged and no progress is possible. This leads to a very compact and elegant code, which we named snewtonm() for safeguarded Newton-Marquardt. It is reliable, but may be less efficient than using the unmodified Hessian.
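
A condensed sketch of one such iteration follows. It is illustrative only: the names, the starting $\lambda$, and the factors for increasing $\lambda$ are assumptions, not the actual snewtm() settings.

```r
# One Newton-Marquardt iteration (illustrative sketch, not the snewtm() code).
f0 <- fn(x); g <- gr(x); H <- he(x)             # evaluated once per iteration
lambda <- 1e-4
repeat {
  Haug <- H + lambda * diag(length(x))          # H_aug = H + lambda * I
  s <- try(solve(Haug, -g), silent = TRUE)
  if (!inherits(s, "try-error")) {
    if (fn(x + s) < f0) { x <- x + s; break }   # lower function value: accept
    if (all(x + s == x)) break                  # step negligible: no progress
  }
  lambda <- 10 * lambda                         # increase lambda and try again
}
```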

Note that it is also possible to combine the Marquardt stabilization with a line search. Thus there is a multitude of possible methods in this general family, which can lead to disagreements about which is "best" unless great care is taken to ensure that the methods under discussion are well defined.

In 2023, in concert with work related to other algorithms, the function snewtm() was created to be called only via optimr(); that is, validity checks and other general steps in running an optimizer have been removed. snewtonm() was removed at that time.

Computing the search vector

If, when solving for $s$ in the Newton equations, the Hessian is not positive definite, we cannot apply fast and stable methods such as the Cholesky decomposition. The Newton-Marquardt approach avoids this difficulty for sufficiently large $\lambda$.

However, the direct solution often does work, so we can simply attempt it, wrapping the solve statement in the R try() function and stopping in the event of failure.
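
A sketch of this attempt, including an optional Cholesky-based solve for the positive-definite case, might look like the following (variable names are illustrative):

```r
# Attempt the direct solve, stopping on failure (illustrative sketch).
s <- try(solve(H, -g), silent = TRUE)
if (inherits(s, "try-error")) stop("Cannot solve the Newton equations")

# If H is positive definite, a Cholesky factorization can be used instead:
R <- try(chol(H), silent = TRUE)                 # fails if H is not PD
if (!inherits(R, "try-error")) {
  s <- backsolve(R, forwardsolve(t(R), -g))      # solve t(R) %*% R %*% s = -g
}
```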

Choosing the step size in the safeguarded Newton method

In the traditional Newton approach the step size is taken to be 1. In practice, this can sometimes mean that the function value is not reduced. As an alternative, we can use a simple backtracking search. We generally start with $step = 1$, but it is trivial to allow a smaller or larger value. Indeed, the control list element defstep in the program snewton allows the initial step to be set to a value other than 1.

If the Armijo condition is not met, we replace $step$ with $r * step$, where $r$ is less than 1; here we suggest control$stepdec = 0.2. We repeat until the trial point satisfies the Armijo condition or $x_t$ is essentially unchanged by the step.

Here "essentially unchanged" is determined by a test using an offset value, that is, the test

$$ (x_t + offset) == (x_t + step \cdot s + offset) $$

where $s$ is the search vector as before; control$offset = 100 is used. We could also, almost equivalently, use the R identical() function.
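
In R, this test might be coded roughly as follows (a sketch; xt, step and s are assumed to hold the current point, step size and search vector):

```r
# Test whether the step changes the parameters at all at working precision.
offset <- 100                                    # control$offset
no_progress <- all((xt + offset) == (xt + step * s + offset))
# almost equivalently: identical(xt + offset, xt + step * s + offset)
# When no_progress is TRUE, the backtracking terminates and xt is returned.
```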

This approach has been coded into the snewton() function. Experience has shown it to be a rather poor method.

Bounds and masks constraints

In late 2021, the addition of bounds and masks constraints to snewtonm() was begun, using the approach described in the vignette "Explaining Gradient Minimizers in R". The function snewtonmb() was developed, and it was found that simply bypassing the bounds-handling code allowed it to run about as quickly as the original (unconstrained) snewtonm() routine. It therefore now replaces that routine, since there seems to be no merit in maintaining two.

The same ideas could be applied to snewton(), but my opinion is that the Marquardt stabilization gives snewtonm() an advantage in reliably finding solutions, because in the latter code the search direction is modified to guarantee a descent direction.

Examples

These examples were originally coded as tests for the interim package snewton but, as of 2018-7-10, are part of the optimx package. We call them below mostly via the optimr() function, since this lets us include the "fname" attribute in the output of the proptimr() function. Note that some count information on the number of Hessian evaluations and "iterations" (generally an algorithm-specific measure) is not always provided.

A simple example

The following example is trivial, in that the Hessian is a constant matrix, and we achieve convergence immediately.
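
As an illustrative sketch, such a run might be set up as below. The specific quadratic, the starting point, the use of optimr()'s hess argument, and the method name "snewtm" are assumptions here, not necessarily the original example.

```r
library(optimx)
# A trivial quadratic whose Hessian is the constant matrix 2*I (illustrative).
fq <- function(x) sum((x - seq_along(x))^2)
gq <- function(x) 2 * (x - seq_along(x))
hq <- function(x) diag(2, length(x))
x0 <- rep(0, 4)
for (meth in c("snewtm", "nlminb")) {      # both can use the supplied Hessian
  ansq <- optimr(x0, fq, gq, hess = hq, method = meth)
  proptimr(ansq)
}
```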


From the number of Hessian evaluations, it appears that nlminb() is also using the Hessian information. Note that the snewton() and snewtonm() functions return count information for iterations and Hessian evaluations. optimr() also builds counts into its internal scaled function, gradient and Hessian functions, and these are displayed by the proptimr() compact output function.

The Rosenbrock function

Let us try our two Newton methods on the unconstrained Rosenbrock function and compare them to some other methods that claim to use the Hessian.
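
A sketch of such a comparison follows; the analytic gradient and Hessian are written out by hand, and the particular methods listed and calling details are illustrative assumptions rather than the original chunk.

```r
# Rosenbrock function with analytic gradient and Hessian (illustrative sketch).
fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
grr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                      200 * (x[2] - x[1]^2))
hr <- function(x) matrix(c(1200 * x[1]^2 - 400 * x[2] + 2, -400 * x[1],
                           -400 * x[1],                     200),
                         nrow = 2, byrow = TRUE)
x0 <- c(-1.2, 1)
for (meth in c("snewton", "snewtm", "nlm", "nlminb")) {
  ans <- optimr(x0, fr, grr, hess = hr, method = meth)
  proptimr(ans)
}
```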


The Wood function

For nlm(), the "standard" start takes more than 100 iterations and returns a non-optimal solution.


A generalized Rosenbrock function

There are several generalizations of the Rosenbrock function (e.g., https://en.wikipedia.org/wiki/Rosenbrock_function#Multidimensional_generalisations). The following example uses the second of the Wikipedia variants.
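
The sketch below codes that variant and supplies a numerically approximated Hessian via the numDeriv package; numDeriv is one possible way to obtain an approximate Hessian and is an assumption here, not necessarily the mechanism used in the original example.

```r
# Generalized Rosenbrock (second Wikipedia variant) with an approximate
# Hessian from numDeriv (illustrative sketch).
grose <- function(x) {
  n <- length(x)
  sum(100 * (x[-1] - x[-n]^2)^2 + (1 - x[-n])^2)
}
grose.g <- function(x) numDeriv::grad(grose, x)       # numerical gradient
grose.h <- function(x) numDeriv::hessian(grose, x)    # numerical Hessian
x0 <- rep(-1.2, 6)
ansg <- optimr(x0, grose, grose.g, hess = grose.h, method = "snewtm")
proptimr(ansg)
```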


Note that the above example includes an illustration of how an approximate Hessian may be invoked.

The Hobbs weed infestation problem

This problem is described in @cnm79. It has various nasty properties. Note that one starting point causes failure of the snewton() optimizer.


An assessment

In a number of tests, in particular those in @Melville18, the snewton() approach is far from satisfactory. This is likely because the computed search direction cannot adapt to find lower function values when the Hessian is near singular. In fact, I do not include this approach in the "all methods" control in the function optimx::opm().

On the other hand, snewtonm() generally works reasonably well, though it is an open question whether the gain in information from using the Hessian contributes to better solutions or better efficiency in optimization.

References


