General-purpose optimization wrapper function that calls other
R tools for optimization, including the existing optim() function.
optim also tries to unify the calling sequence to allow
a number of tools to use the same front-end. These include 
spg from the BB package, ucminf, nlm, and 
nlminb. Note that 
optim() itself allows Nelder–Mead, quasi-Newton and 
conjugate-gradient algorithms as well as box-constrained optimization 
via L-BFGS-B. Because SANN does not return a meaningful convergence code
(conv), optimz::optim() does not call the SANN method.
| par | a vector of initial values for the parameters for which optimal values are to be found. Names on the elements of this vector are preserved and used in the results data frame. | 
| fn | A function to be minimized (or maximized), with first argument the vector of parameters over which minimization is to take place. It should return a scalar result. | 
| gr | A function to return (as a vector) the gradient for those methods that can use this information. If gr is NULL, a finite-difference approximation to the gradient is used (see the usenumDeriv control in ‘Details’). | 
| lower, upper | Bounds on the variables for methods such as "L-BFGS-B" that can handle box (or bounds) constraints. | 
| method | A list of the methods to be used. Note that this is an important change from optim(), which allows only one method to be specified. See ‘Details’. The default of NULL causes an appropriate set of methods to be supplied depending on the presence or absence of bounds on the parameters. The default unconstrained set is Rvmminu, Rcgminu, lbfgsb3, newuoa and nmkb. The default bounds constrained set is Rvmminb, Rcgminb, lbfgsb3, bobyqa and nmkb. | 
| hessian | A logical control that if TRUE forces the computation of an approximation to the Hessian at the final set of parameters. If FALSE (default), the hessian is calculated if needed to provide the KKT optimality tests (see kkt in ‘Details’ for the control list). | 
| control | A list of control parameters. See ‘Details’. | 
| ... | Further arguments to be passed to fn (and gr). | 
Note that arguments after ... must be matched exactly.
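As an illustrative sketch only (optimz::optim() is the wrapper named in the Description above; the Rosenbrock test function and its gradient are standard examples, not part of this documentation):

fr <- function(x) {   ## Rosenbrock banana function
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}
grr <- function(x) {  ## analytic gradient of fr
  c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
     200 * (x[2] - x[1]^2))
}
ans <- optimz::optim(par = c(-1.2, 1), fn = fr, gr = grr)  ## method = NULL uses the default set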
By default this function performs minimization, but it will maximize
if control$maximize is TRUE. The original optim() function allows
control$fnscale to be set negative to accomplish this. Do not
use both mechanisms. 
Possible method codes are 'Nelder-Mead', 'BFGS', 'CG', 'L-BFGS-B', 'nlm', 'nlminb', 'spg', 'ucminf', 'newuoa', 'bobyqa', 'nmkb', 'hjkb', 'Rcgmin', 'lbfgsb3' or 'Rvmmin'. These are in base R or in CRAN repositories. From R-forge, method 'Rtnmin' is available. Other methods are likely to be added over time.
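A sketch of selecting particular methods, and of maximizing via control$maximize, using the fr and grr defined above (method names are taken from the list; availability depends on which packages are installed):

ans2 <- optimz::optim(par = c(-1.2, 1), fn = fr, gr = grr,
                      method = c("Rvmmin", "nmkb"))
## maximization: set control$maximize rather than a negative fnscale
negq <- function(x) -sum((x - 3)^2)   ## illustrative concave function, maximum at c(3, 3)
ans3 <- optimz::optim(par = c(0, 0), fn = negq,
                      control = list(maximize = TRUE))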
The default methods for unconstrained problems (no lower or
upper specified) are an implementation of the Nelder–Mead method
(Nelder and Mead, 1965) and a Variable Metric method based on the ideas
of Fletcher (1970) as modified by him in conversation with Nash (1979). Nelder-Mead
uses only function values and is robust but relatively slow.  It will 
work reasonably well for non-differentiable functions. The Variable
Metric method, "BFGS" updates an approximation to the inverse
Hessian using the BFGS update formulas, along with an acceptable point
line search strategy. This method appears to work best with analytic
gradients. ("Rvmmmin" provides a box-constrained version of this
algorithm.
If no method is given, and there are bounds constraints provided,
the method is set to "L-BFGS-B".
Method "CG" is a conjugate gradients method based on that by
Fletcher and Reeves (1964) (but with the option of Polak–Ribiere or
Beale–Sorenson updates). The particular implementation is now dated,
and improved yet simpler codes have been implemented. Furthermore, 
"Rcgmin" allows box constraints as well as fixed (masked)
parameters. Conjugate gradient methods will generally be more fragile 
than the BFGS method, but as they do not store a matrix they may be 
successful in optimization problems with a large number of parameters.
Method "L-BFGS-B" is that of Byrd et. al. (1995) which
allows box constraints, that is each variable can be given a lower
and/or upper bound. The initial value must satisfy the constraints.
This uses a limited-memory modification of the BFGS quasi-Newton
method. If non-trivial bounds are supplied, this method is selected
by the original optim() function, with a warning. Unfortunately,
the authors of the original Fortran version of this method released a
correction for bugs in 2011, but these have not been incorporated into
the distributed R codes, which are a C translation of a version that
appears to be from the mid-1990s. While it seems the errors affect 
very few computations, users may wish to use the Fortran codes in 
package lbfgsb3. 
Nocedal and Wright (1999) is a comprehensive reference for the previous three methods.
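A hedged sketch of a bounds-constrained call follows (the starting values must satisfy the bounds; "lbfgsb3" is shown as the Fortran-based alternative mentioned above):

ansb <- optimz::optim(par = c(0.5, 0.5), fn = fr, gr = grr,
                      lower = c(0, 0), upper = c(2, 2),
                      method = "lbfgsb3")   ## or method = "L-BFGS-B"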
Function fn can return NA or Inf if the function
cannot be evaluated at the supplied value, but the initial value must
have a computable finite value of fn. However, some methods, of
which "L-BFGS-B" is known to be a case, require that the values
returned should always be finite.
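For example, an objective can signal an inadmissible point by returning Inf, provided the starting parameters give a finite value (a sketch only; methods such as "L-BFGS-B" that require finite values throughout may still fail on such a function):

flog <- function(x) {
  if (any(x <= 0)) return(Inf)  ## not evaluable here; signal with Inf
  sum(x - log(x))               ## finite for x > 0, minimum at c(1, 1)
}
ans4 <- optimz::optim(par = c(2, 2), fn = flog, method = "Nelder-Mead")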
While optim can be used recursively, and for a single parameter
as well as many, this may not be true for optimx. optim
also accepts a zero-length par, and just evaluates the function 
with that argument.
Method "nlm" is from the package of the same name that implements
ideas of Dennis and Schnabel (1983) and Schnabel et al. (1985). See nlm()
for more details.
Method "nlminb" is the package of the same name that uses the
minimization tools of the PORT library.  The PORT documentation is at 
<URL: http://netlib.bell-labs.com/cm/cs/cstr/153.pdf>. See nlminb()
for details. (Though there is very little information about the methods.)
Method "spg" is from package BB implementing a spectral projected 
gradient method for large-scale optimization with simple constraints due
R adaptation, with significant modifications, by Ravi Varadhan,
Johns Hopkins University (Varadhan and Gilbert, 2009), from the original
FORTRAN code of Birgin, Martinez, and Raydan (2001). 
Method "Rcgmin" is from the package of that name. It implements a
conjugate gradient algorithm with the Yuan/Dai update (ref??) and also 
allows bounds constraints on the parameters. (Rcgmin also allows mask 
constraints – fixing individual parameters – but there is as yet no 
interface from "optimr".) 
Method "Rvmmin" is from the package of that name. It implements 
the same variable metric method as the base optim() function with method
"BFGS" but allows bounds constraints on the parameters. (Rvmmin 
also allows mask constraints – fixing individual parameters – but 
there is as yet no interface from "optimr".) 
Method "Rtnmin" is from the package of that name. It implements a
truncated Newton method of Stephen Nash translated from Matlab. It 
allows bounds constraints on the parameters. 
Methods "bobyqa", "uobyqa" and "newuoa" are from the 
package "minqa" which implement optimization by quadratic approximation
routines of the similar names due to M J D Powell (2009). See package minqa 
for details. Note that "uobyqa" and "newuoa" are for 
unconstrained minimization, while "bobyqa" is for box constrained
problems. While "uobyqa" may be specified, it is NOT part of the 
all.methods = TRUE set.
Methods "nmkb" and "hjkb" are from package dfoptim. They 
implement respectively variants of the Nelder-Mead and Hooke and Jeeves
derivative-free methods, but both allow bounds constraints. However, it is
important to note that "nmkb" must NOT have starting parameters on
a lower or upper bound, as a transformation of the parameters is used to
effect the constraints. 
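For instance, a sketch of a valid "nmkb" call keeps the starting values strictly inside the bounds:

ans5 <- optimz::optim(par = c(0.5, 0.5), fn = fr,   ## 0 < par < 2 strictly
                      lower = c(0, 0), upper = c(2, 2),
                      method = "nmkb")
## par = c(0, 0.5) would start ON a lower bound and must not be used with "nmkb"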
The control argument is a list that can supply any of the
following components (an illustrative sketch follows the lists of control elements below):
trace: Non-negative integer. If positive,
tracing information on the
progress of the optimization is produced. Higher values may
produce more tracing information: for method "L-BFGS-B"
there are six levels of tracing. trace = 0 gives no output.
(To understand exactly what these do, see the source code: higher 
levels give more detail.)
follow.on = TRUE or FALSE. If TRUE, and there are multiple methods, then the last set of parameters from one method is used as the starting set for the next.
save.failures= TRUE if we wish to keep "answers" from runs where the method does not return convcode==0. FALSE otherwise (default).
maximize = TRUE if we want to maximize rather than minimize 
a function. (Default FALSE). Methods nlm, nlminb, ucminf cannot maximize a
function, so the user must explicitly minimize and carry out the adjustment
externally. However, there is a check to avoid
usage of these codes when maximize is TRUE. See fnscale below for 
the method used in optim that we deprecate.
all.methods= TRUE if we want to use all available (and suitable) methods.
kkt=FALSE if we do NOT want to test the Kuhn, Karush, Tucker
optimality conditions. The default is TRUE. However, because the Hessian
computation may be very slow, we set kkt to be FALSE if there are 
more than 50 parameters when the gradient function gr is not 
provided, and more than 500
parameters when such a function is specified. We return logical values KKT1
and KKT2 TRUE if first and second order conditions are satisfied approximately.
Note, however, that the tests are sensitive to scaling, and users may need
to perform additional verification. If kkt is FALSE but hessian
is TRUE, then KKT1 is generated, but KKT2 is not.
kkttol= value to use to check for small gradient and negative Hessian eigenvalues. Default = .Machine$double.eps^(1/3)
kkt2tol= Tolerance for eigenvalue ratio in KKT test of positive definite Hessian. Default same as for kkttol
starttests= TRUE if we want to run tests of the function and parameters: feasibility relative to bounds, analytic vs numerical gradient, scaling tests, before we try optimization methods. Default is TRUE.
dowarn= TRUE if we want warnings generated by optimx. Default is TRUE.
badval= The value to set for the function value when try(fn()) fails. Default is (0.5)*.Machine$double.xmax
usenumDeriv= TRUE if the numDeriv function grad() is
to be used to compute gradients when the argument gr is NULL or not supplied.
The following control elements apply only to some of the methods. The list
may be incomplete. See individual packages for details. 
fnscale: An overall scaling to be applied to the value
of fn and gr during optimization. If negative,
turns the problem into a maximization problem. Optimization is
performed on fn(par)/fnscale. For methods from the set in
optim(). Note potential conflicts with the control maximize.
parscale: A vector of scaling values for the parameters.
Optimization is performed on par/parscale and these should be
comparable in the sense that a unit change in any element produces
about a unit change in the scaled value. For optim.
ndeps: A vector of step sizes for the finite-difference
approximation to the gradient, on the par/parscale
scale. Defaults to 1e-3. For optim.
maxit: The maximum number of iterations. Defaults to
100 for the derivative-based methods, and
500 for "Nelder-Mead".
abstol: The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.
reltol: Relative convergence tolerance. The algorithm
stops if it is unable to reduce the value by a factor of
reltol * (abs(val) + reltol) at a step. Defaults to
sqrt(.Machine$double.eps), typically about 1e-8. For optim.
alpha, beta, gamma: Scaling parameters
for the "Nelder-Mead" method. alpha is the reflection
factor (default 1.0), beta the contraction factor (0.5) and
gamma the expansion factor (2.0).
REPORT: The frequency of reports for the "BFGS" and
"L-BFGS-B" methods if control$trace
is positive. Defaults to every 10 iterations for "BFGS" and
"L-BFGS-B".
type: For the conjugate-gradients method. Takes value
1 for the Fletcher–Reeves update, 2 for
Polak–Ribiere and 3 for Beale–Sorenson.
lmm: An integer giving the number of BFGS updates
retained in the "L-BFGS-B" method. It defaults to 5.
factr: Controls the convergence of the "L-BFGS-B"
method. Convergence occurs when the reduction in the objective is
within this factor of the machine tolerance. Default is 1e7,
that is a tolerance of about 1e-8.
pgtol: Helps control the convergence of the "L-BFGS-B"
method. It is a tolerance on the projected gradient in the current
search direction. This defaults to zero, when the check is
suppressed.
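A sketch combining several of the control elements above (the values shown are purely illustrative):

ctrl <- list(trace = 1,      ## print some progress information
             maxit = 200,    ## iteration limit for methods that honour it
             reltol = 1e-10, ## relative tolerance for optim()-style methods
             kkt = FALSE)    ## skip the (possibly slow) KKT tests
ans6 <- optimz::optim(par = c(-1.2, 1), fn = fr, gr = grr, control = ctrl)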
Any names given to par will be copied to the vectors passed to
fn and gr.  Note that no other attributes of par
are copied over. (We have not verified this as at 2009-07-29.)
For ‘optim’, a list with components:
| par | The best set of parameters found. | 
| value | The value of ‘fn’ corresponding to ‘par’. | 
| counts | A two-element integer vector giving the number of calls to ‘fn’ and ‘gr’ respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to ‘fn’ to compute a finite-difference approximation to the gradient. | 
| convergence | An integer code. ‘0’ indicates successful completion. | 
|  message | A character string giving any additional information returned by the optimizer, or ‘NULL’. | 
| hessian | Always NULL for this routine. | 
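For illustration, the components listed above can be inspected as in this sketch (assuming a call like the one shown earlier; names given in par are preserved):

ans <- optimz::optim(par = c(a = -1.2, b = 1), fn = fr, gr = grr)
ans$par          ## best parameters found, names "a" and "b" preserved
ans$value        ## value of fn at ans$par
ans$convergence  ## 0 indicates successful completion
ans$counts       ## calls to fn and gr respectively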
See the manual pages for optim() and the packages the DESCRIPTION suggests.
Nash JC and Varadhan R (2011). Unifying Optimization Algorithms to Aid Software System Users: optimx for R. Journal of Statistical Software, 43(9), 1-14. URL http://www.jstatsoft.org/v43/i09/.
Nocedal J and Wright SJ (1999). Numerical Optimization. Springer, New York.
?? Yuan and Dai