An algorithm for general-purpose unconstrained non-linear optimization. The algorithm is of quasi-Newton type with BFGS updating of the inverse Hessian and soft line search with a trust region type monitoring of the input to the line search algorithm. The interface of ‘ucminf’ is designed for easy interchange with ‘optim’.
ucminf(par, fn, gr = NULL, ..., control = list(), hessian = 0)
par 
Initial estimate of minimum for fn. 
fn 
Objective function to be minimized. 
gr 
Gradient of objective function. If NULL a finite difference approximation is used. 
... 
Optional arguments passed to the objective and gradient functions. 
control 
A list of control parameters. See ‘Details’. 
hessian 
Integer value:
0 No Hessian approximation is returned.
1 Returns a numerical approximation of the Hessian.
2 Returns the final approximation of the inverse Hessian based on the BFGS updates during the optimization.
3 Same as 2, but also returns the corresponding Hessian (the inverse of option 2).
If a TRUE or FALSE value is given, it will switch between option 1 or 0.
The algorithm is documented in (Nielsen, 2000) (see References below) together with a comparison to the Fortran subroutine ‘MINF’ and the Matlab function ‘fminunc’. The implementation of ‘ucminf’ in R uses the original Fortran version of the algorithm.
The interface in R is designed so that it is very easy to switch between using ‘ucminf’ and ‘optim’. The arguments par, fn, gr, and hessian are all the same (with a few extra options for hessian in ‘ucminf’). The difference is that there is no method argument in ‘ucminf’ and that some of the components in the control argument are different due to differences in the algorithms.
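As a quick illustration (not from the original help page; the quadratic objective below is made up for the purpose), the same par, fn and gr arguments can be passed to either optimizer:

```r
# A minimal sketch of the interchange between 'ucminf' and 'optim'.
# The objective is illustrative, with its minimum at x = c(3, 3).
f <- function(x) sum((x - 3)^2)
g <- function(x) 2 * (x - 3)

if (requireNamespace("ucminf", quietly = TRUE)) {
  r1 <- ucminf::ucminf(par = c(0, 0), fn = f, gr = g)
  # Same arguments, except that 'optim' needs a 'method':
  r2 <- optim(par = c(0, 0), fn = f, gr = g, method = "BFGS")
  print(r1$par)
  print(r2$par)
}
```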
The algorithm can be given an initial estimate of the Hessian for the optimization and it is possible to get the final approximation of the Hessian based on the series of BFGS updates. This extra functionality may be useful for optimization in a series of related problems.
The functions fn and gr can return Inf or NaN if the functions cannot be evaluated at the supplied value, but the functions must be computable at the initial value. The functions are not allowed to return NA. Any names given to par will be copied to the vectors passed to fn and gr. No other attributes of par are copied over.
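For example (a sketch with a made-up objective defined only for positive x), a function may signal infeasible points by returning Inf, provided the initial value is feasible:

```r
# Sketch: an objective defined only for x > 0 that returns Inf outside
# its domain (returning NA would not be allowed). Minimum at x = 1.
fB <- function(x) {
  if (any(x <= 0)) return(Inf)   # infeasible point
  sum(x - log(x))
}

if (requireNamespace("ucminf", quietly = TRUE)) {
  # Names on 'par' are copied to the vector passed to 'fn'.
  res <- ucminf::ucminf(par = c(a = 2, b = 0.5), fn = fB)
  print(res$par)   # estimates close to 1 in each coordinate
}
```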
The control argument is a list that can supply any of the following components:
trace
If trace is positive then detailed tracing information is printed for each iteration.
grtol
The algorithm stops when ||F'(x)||_inf <= grtol, that is, when the largest absolute value of the gradient is less than grtol. Default value is grtol = 1e-6.
xtol
The algorithm stops when ||x - x_p||_2 <= xtol*(xtol + ||x||_2), where x_p and x are the previous and current estimates of the minimizer. Thus the algorithm stops when the last relative step length is sufficiently small. Default value is xtol = 1e-12.
stepmax
Initial maximal allowed step length (radius of trust region). The value is updated during the optimization. Default value is stepmax = 1.
maxeval
The maximum number of function evaluations. A function evaluation
is counted as one evaluation of the objective function and of the
gradient function. Default value is maxeval = 500
.
grad
Either ‘forward’ or ‘central’. Controls
the type of finite difference approximation to be used for the
gradient if no gradient function is given in the input argument
‘gr’. Default value is grad = 'forward'
.
gradstep
Vector of length 2. The step length in the finite difference approximation for the gradient. The step length is |x_i|*gradstep[1] + gradstep[2]. Default value is gradstep = c(1e-6, 1e-8).
invhessian.lt
A vector with an initial approximation to the lower triangle of the inverse Hessian. If not given, the inverse Hessian is initialized as the identity matrix. If H0 is the initial Hessian matrix, then the lower triangle of the inverse of H0 can be found as invhessian.lt = solve(H0)[lower.tri(H0,diag=TRUE)].
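Combining some of these components (a sketch; the quadratic below and its starting points are illustrative, and it assumes hessian = 2 returns the invhessian component described under ‘Value’), the inverse Hessian from one fit can warm-start a related fit through invhessian.lt:

```r
# Sketch of a warm start across related problems: extract the lower
# triangle of the inverse Hessian from a first fit and pass it as the
# initial approximation for a second fit.
fQ <- function(x) sum((4 * x - 1)^2)   # minimum at x = 0.25
gQ <- function(x) 32 * x - 8

if (requireNamespace("ucminf", quietly = TRUE)) {
  first <- ucminf::ucminf(par = c(2, 2), fn = fQ, gr = gQ, hessian = 2)
  ih <- first$invhessian[lower.tri(first$invhessian, diag = TRUE)]
  second <- ucminf::ucminf(par = c(2.5, 1.5), fn = fQ, gr = gQ,
                           control = list(invhessian.lt = ih))
  print(second$par)   # both coordinates near 0.25
}
```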
par 
Computed minimizer. 
value 
Objective function value at computed minimizer. 
convergence 
Flag for reason of termination:
1 Stopped by small gradient (grtol).
2 Stopped by small step (xtol).
3 Stopped by function evaluation limit (maxeval).
4 Stopped by zero step from line search.
-2 Computation did not start: length(par) = 0.
-4 Computation did not start: stepmax is too small.
-5 Computation did not start: grtol or xtol <= 0.
-6 Computation did not start: maxeval <= 0.
-7 Computation did not start: given Hessian not pos. definite.
message 
String with reason of termination. 
hessian, invhessian 
Estimate of the Hessian (or inverse Hessian) at the computed minimizer. The type of estimate is given by the input argument ‘hessian’. 
invhessian.lt 
The lower triangle of the final approximation to the inverse Hessian based on the series of BFGS updates during optimization. 
info 
Information about the search:
maxgradient: the largest absolute gradient component at termination.
laststep: the length of the last step.
stepmax: the final maximal allowed step length.
neval: the number of function (and gradient) evaluations.
‘UCMINF’ algorithm design and Fortran code by Hans Bruun Nielsen.
Implementation in R by Stig B. Mortensen, [email protected].
Nielsen, H. B. (2000) ‘UCMINF - An Algorithm For Unconstrained, Nonlinear Optimization’, Report IMM-REP-2000-18, Department of Mathematical Modelling, Technical University of Denmark. http://www2.imm.dtu.dk/~hbn/publ/TR0019.ps or http://orbit.dtu.dk/recid/200975.
The original Fortran source code can be found at http://www2.imm.dtu.dk/~hbn/Software/ucminf.f. The code has been slightly modified in this package to be suitable for use with R.
The general structure of the implementation in R is based on the package ‘FortranCallsR’ by Diethelm Wuertz.
## Rosenbrock Banana function
fR = function(x) { (1-x[1])^2+100*(x[2]-x[1]^2)^2 }
gR = function(x) { c( -400*x[1]*(x[2]-x[1]*x[1]) - 2*(1-x[1]),
200*(x[2]-x[1]*x[1])) }
# Find minimum
ucminf(par = c(2,.5), fn = fR, gr=gR)
# Compare hessian approximations
ucminf(par = c(2,.5), fn = fR, gr=gR, hessian=1)$hessian
ucminf(par = c(2,.5), fn = fR, gr=gR, hessian=3)$hessian
# Compare run times with optim's BFGS method
# (chosen convergence criteria result in similar accuracy)
system.time( for(i in 1:500)
ucminf(par = c(2,0.5), fn = fR, gr=gR)
)
system.time( for(i in 1:500)
optim(par = c(2,0.5), fn = fR, gr=gR,method='BFGS')
)
## Quadratic function
fQ = function(x) { sum((4*x-1)^2) }
gQ = function(x) { 32*x - 8 }
# Find minimum with too small stepmax and print trace
ucminf(par = c(20.5,20.0), fn = fQ, gr = gQ,
control=list(stepmax=1,trace=TRUE))
# The same again with a larger stepmax
ucminf(par = c(20.5,20.0), fn = fQ, gr = gQ,
control=list(stepmax=100,trace=TRUE))
