shor: Shor's R Algorithm


View source: R/shor.R

Description

Shor's R algorithm for unconstrained minimization of smooth and non smooth functions.

Usage

shor(fn, gr, nvar = 0, nstart = 10, x0 = NULL, upper = 1, lower = 0,
    maxit = 1000, fvalquit = -Inf, beta = 0.5, normtol = 1e-06, xnormquit = Inf,
    evaldist = 1e-04, ngrad = 0, rescale = 0,
    strongwolfe = 0, useprevstep = 0, wolfe1 = 1e-04,
    wolfe2 = 0.5, quitLSfail = TRUE, prtlevel = 1)

Arguments

fn

A function to be minimized. fn(x) takes as input a vector of parameters over which minimization is to take place. fn() returns a scalar.

gr

A function to return the gradient for fn(x).

nvar

Number of parameters that fn() takes.

nstart

Number of initial guesses. Default is 10.

x0

Matrix of dimension (nvar x nstart); each column represents an initial guess.

upper

Upper bound for the initial values.

lower

Lower bound for the initial values.

maxit

Maximum number of iterations.

fvalquit

Quit if f drops below this value.

beta

The key parameter, between 0 and 1, or NaN to minimize the error in the secant equation at each iteration (default 0.5). beta = 1 gives steepest descent; beta -> 0 approaches conjugate gradient (beta = 0 causes division by zero). Note: beta = 1 - gamma, where gamma is the notation used in the IMAJNA paper.

normtol

Termination tolerance on d, the smallest vector in the convex hull of up to ngrad gradients (default 1e-6).

xnormquit

Quit if norm(x) drops below this value.

evaldist

The gradients used in the termination test qualify only if they are evaluated at points approximately within distance evaldist of x.

ngrad

Number of gradients to save and use in solving the QP that checks the optimality tolerance on the smallest vector in their convex hull; see also normtol and evaldist.

rescale

If 1, rescale B to have infinity norm 1 at every iteration.

strongwolfe

If 1, a strong Wolfe line search is used; otherwise, a weak Wolfe line search is used. Default is 0.

useprevstep

If 1, the line search is initialized with the previous step. Setting this to 1 seemed to perform better in practice, though this is hard to justify rationally.

wolfe1

Wolfe line search parameter 1 (sufficient decrease condition; default 1e-4).

wolfe2

Wolfe line search parameter 2 (curvature condition; default 0.5).

quitLSfail

1 to quit when the line search fails; 0 otherwise.

prtlevel

If 1, prints messages from within the code; 0 for quiet operation.

Value

Returns a list containing the following fields:

x

A matrix whose k'th column contains the final value of x obtained from the k'th column of x0.

f

A vector of the final minimum values of fn() obtained from the initial points.

g

Each column is the gradient at the corresponding column of x.

B

List of the final Shor matrices.

frec

Record of function evaluations.

betarec

Record of beta values.

xrec

Record of iterates.

svrec

Record of singular values of the Hessian.

Author(s)

Copyright (c) 2010 Michael Overton, for the Matlab code and documentation; converted to R, with permission, by Abhirup Mallik (and Hans W. Borchers).

See Also

hanso, gradsamp, nlcg, bfgs

Examples

nesterov_f <- function(x) {
    # Nesterov's (nonsmooth) test function
    x1 <- x[1]; x2 <- x[2]
    1/4*(x1 - 1)^2 + abs(x2 - 2*x1^2 + 1)
}

nesterov_g <- function(x) {     # analytical (sub)gradient
    g <- matrix(NA, 2, 1)
    x1 <- x[1]; x2 <- x[2]
    sgn <- sign(x2 - 2*x1^2 + 1)
    if (sgn != 0) {             # gradient is undefined where the abs() term vanishes
        g[1] <- 0.5*(x1 - 1) - sgn*4*x1
        g[2] <- sgn
    }
    g
}
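The functions above define the objective and its gradient but are never actually passed to shor(). A minimal invocation, assuming the rHanso package is installed and the two functions above are defined, might look like:

```r
# Minimal sketch: minimize the Nesterov function from a single
# starting point. x0 must be an (nvar x nstart) matrix, here 2 x 1.
library(rHanso)
res <- shor(fn = nesterov_f, gr = nesterov_g,
            nvar = 2, nstart = 1,
            x0 = matrix(c(-1, 1), 2, 1))
res$x   # final iterate(s); the true minimizer is (1, 1)
res$f   # final objective value(s); the true minimum is 0
```

Because the objective is nonsmooth along the curve x2 = 2*x1^2 - 1, convergence can be slow and the attained accuracy depends on normtol, evaldist, and the line search settings.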

rHanso documentation built on May 2, 2019, 5:26 p.m.