# hanso: Hybrid Algorithm for Non-Smooth Optimization (HANSO)

In rHanso: An R Implementation of the Hybrid Algorithm for Non-Smooth Optimization (HANSO)

## Description

Minimization algorithm intended for nonsmooth, nonconvex functions, but also applicable to functions that are smooth, convex, or both.

## Usage

```r
hanso(fn, gr, x0 = NULL, upper = 1, lower = 0, nvar = 0, nstart = 10,
      maxit = 1000, maxitgs = 100, normtol = 1e-06, fvalquit = -Inf,
      xnormquit = Inf, nvec = 0, prtlevel = 1, strongwolfe = 0,
      wolfe1 = 1e-04, wolfe2 = 0.5, quitLSfail = 1,
      ngrad = min(100, 2 * nvar, nvar + 10), evaldist = 1e-04,
      H0 = diag(nvar), scale = 1, samprad = c(1e-04, 1e-05, 1e-06))
```

## Details

It is a two-phase process: the BFGS phase works well for smooth functions, while the gradient sampling phase handles nonsmooth ones. Gradient sampling uses the minimum point found by BFGS as its starting point.

BFGS phase: BFGS is run from multiple starting points, taken from the columns of x0 if provided, and otherwise from 10 randomly generated points. If the termination test is satisfied at the best point found by BFGS, HANSO terminates; otherwise, it continues to:

Gradient sampling phases: three gradient sampling phases are run from the lowest point found, using the sampling radii 10\*evaldist, evaldist, and evaldist/10.
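The two-phase process above can be sketched with a small nonsmooth test problem. This is a hypothetical usage example, assuming the rHanso package is installed and that `hanso()` accepts the arguments shown in the Usage section; the L1-norm objective is chosen here purely for illustration.

```r
library(rHanso)

# A simple nonsmooth, convex test function: the L1 norm,
# minimized at the origin but nondifferentiable there.
fn <- function(x) sum(abs(x))
# A (sub)gradient, valid away from the kinks:
gr <- function(x) sign(x)

# Minimize in 2 variables from 5 random starting points
# (drawn from [lower, upper], i.e. [0, 1] by default).
res <- hanso(fn, gr, nvar = 2, nstart = 5, prtlevel = 0)

res$f  # final minimum values; values near 0 suggest convergence
```

BFGS alone often stalls near the kink of a nonsmooth function; the subsequent gradient sampling phases refine the result at progressively smaller sampling radii.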

## Value

Returns a list containing the following fields:

- `x` — a matrix whose k-th column contains the final value of x obtained from the k-th column of x0.
- `f` — a vector of the final minimum values of fn() obtained from the initial points.
- `loc` — local optimality certificate, a list with 2 fields:
  - `dnorm`: norm of a vector in the convex hull of gradients of the function evaluated at and near x.
  - `evaldist`: specifies the maximum distance from x at which these gradients were evaluated.

  The smaller `loc$dnorm` and `loc$evaldist` are, the more likely it is that x is an approximate local minimizer.
- `H` — final BFGS inverse Hessian approximation.
- `X` — iterates at which the saved gradients were evaluated.
- `G` — saved gradients used for the computation.
- `w` — weights giving the smallest vector.
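A hedged sketch of inspecting these fields, assuming `hanso()` returns the list described above (the objective and gradient here are illustrative choices, not part of the package):

```r
library(rHanso)

res <- hanso(function(x) sum(abs(x)),  # nonsmooth objective
             function(x) sign(x),      # its (sub)gradient
             nvar = 2, prtlevel = 0)

res$x           # final point(s), one column per starting point
res$loc$dnorm   # small norm: a near-zero vector lies in the
res$loc$evaldist  # convex hull of nearby gradients, evidence of
                  # approximate local optimality
```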

## Author(s)

Copyright (c) 2010 Michael Overton for the Matlab code and documentation; converted to R, with permission, by Abhirup Mallik (and Hans W. Borchers). [email protected]

## References

A. S. Lewis and M. L. Overton, "Nonsmooth Optimization via BFGS", 2008.

J. V. Burke, A. S. Lewis and M. L. Overton, "A Robust Gradient Sampling Algorithm for Nonsmooth, Nonconvex Optimization", SIAM J. Optimization 15 (2005), pp. 751-779.