mxComputeNelderMead
OpenMx includes a flexible, options-rich implementation of the Nelder-Mead algorithm.
mxComputeNelderMead(
freeSet=NA_character_, fitfunction="fitfunction", verbose=0L,
nudgeZeroStarts=mxOption(NULL,"Nudge zero starts"),
maxIter=NULL, ...,
alpha=1, betao=0.5, betai=0.5, gamma=2, sigma=0.5, bignum=1e35,
iniSimplexType=c("regular","right","smartRight","random"),
iniSimplexEdge=1, iniSimplexMat=NULL, greedyMinimize=FALSE,
altContraction=FALSE, degenLimit=0, stagnCtrl=c(-1L,-1L),
validationRestart=TRUE,
xTolProx=1e-8, fTolProx=1e-8,
doPseudoHessian=TRUE,
ineqConstraintMthd=c("soft","eqMthd"),
eqConstraintMthd=c("GDsearch","soft","backtrack","l1p"),
backtrackCtrl=c(0.5,5),
centerIniSimplex=FALSE)
freeSet: Character-string names of MxMatrices containing free parameters.
fitfunction: Character-string name of the fitfunction; defaults to 'fitfunction'.
verbose: Integer level of reporting printed to terminal at runtime; defaults to 0.
nudgeZeroStarts: Should free parameters with start values of zero be "nudged" to 0.1 at runtime? Defaults to the current global value of mxOption "Nudge zero starts". May be a logical value, or one of the character strings "Yes" or "No".
maxIter: Integer maximum number of iterations; a value of NULL (the default) is also accepted.
...: Not used. Forces remaining arguments to be specified by name.
alpha: Numeric reflection coefficient. Must be positive. Defaults to 1.0.
betao, betai: Numeric outside- and inside-contraction coefficients, respectively. Both must be within the unit interval (0,1). Both default to 0.5.
gamma: Numeric expansion coefficient. If positive, must be greater than 1.0. Defaults to 2.0.
sigma: Numeric shrink coefficient. Cannot exceed 1.0. If non-positive, shrink transformations will not be carried out, and failed contractions will instead be followed by a simplex restart. Defaults to 0.5.
bignum: Numeric value with which the fitfunction value is replaced if the fit is non-finite or is evaluated at infeasible parameter values. Defaults to 1e35.
iniSimplexType: Character string naming the method by which to construct the initial simplex from the free-parameter start values. Defaults to "regular". See below for details.
iniSimplexEdge: Numeric edge length of the initial simplex. Defaults to 1.0.
iniSimplexMat: Optional numeric matrix providing the vertices of the initial simplex. The matrix must have as many columns as there are free parameters in the MxModel. The matrix's number of rows must be no less than the number of free parameters minus the number of degrees-of-freedom gained from equality MxConstraints, if any. If a non-NULL value is provided, the initial simplex is taken from its rows rather than constructed per iniSimplexType and iniSimplexEdge; see below for details.
greedyMinimize: Logical; should the optimizer use "greedy minimization"? Defaults to FALSE. See below for details.
altContraction: Logical; should the optimizer use an "alternate contraction" transformation? Defaults to FALSE. See below for details.
degenLimit: Numeric "degeneracy limit"; defaults to 0. If positive, the simplex will be restarted if the measure of the angle between any two of its edges is within 0 or pi by less than degenLimit.
stagnCtrl: "Stagnation control"; integer vector of length 2; defaults to c(-1L,-1L). See below for details.
validationRestart: Logical; defaults to TRUE. See below for details.
xTolProx: Numeric "domain-convergence" criterion; defaults to 1e-8. See below for details.
fTolProx: Numeric "range-convergence" criterion; defaults to 1e-8. See below for details.
doPseudoHessian: Logical; defaults to TRUE. See below for details.
ineqConstraintMthd: "Inequality constraint method"; character string. Defaults to "soft". See below for details.
eqConstraintMthd: "Equality constraint method"; character string. Defaults to "GDsearch". See below for details.
backtrackCtrl: Numeric vector of length two. See below for details.
centerIniSimplex: Logical; defaults to FALSE. If TRUE, the initial simplex is centered on the start values rather than having the start values as its "first" vertex.
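As a point of orientation (not taken from this help page), the sketch below builds a toy one-factor RAM model from OpenMx's demoOneFactor data and attaches a custom Nelder-Mead compute plan. It assumes the usual OpenMx convention that adding an MxCompute* object to a model overrides the model's default compute plan; the start values are illustrative only.

library(OpenMx)
data(demoOneFactor)
manifests <- names(demoOneFactor)
factorModel <- mxModel(
  "OneFactor", type="RAM",
  manifestVars=manifests, latentVars="G",
  mxPath(from="G", to=manifests, values=0.8),          # free factor loadings
  mxPath(from=manifests, arrows=2, values=1),          # residual variances
  mxPath(from="G", arrows=2, free=FALSE, values=1),    # fix the latent variance to 1
  mxData(cov(demoOneFactor), type="cov", numObs=500)
)
nmPlan  <- mxComputeNelderMead()         # default Nelder-Mead settings
nmModel <- mxModel(factorModel, nmPlan)  # the added compute step becomes the model's plan
nmFit   <- mxRun(nmModel)
summary(nmFit)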
The state of a Nelder-Mead optimization problem is represented by a simplex (polytope) of n+1 vertices in the space of the free parameters, where n is the number of free parameters minus the number of degrees-of-freedom gained from equality MxConstraints. An iteration of the algorithm first sorts the n+1 vertices by their corresponding fitfunction values (i.e., the values of the fitfunction when evaluated at each vertex), in ascending order (i.e., from "best" fit to "worst" fit). Then, the "subcentroid," which is the centroid of the "best" n vertices, is calculated. The algorithm then attempts to improve upon the worst fit by transforming the simplex; see Singer & Nelder (2009) for details.
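As a rough illustration of the transformations just described, here is a compact textbook-style sketch of one Nelder-Mead iteration in R. It is not OpenMx's internal implementation, and it ignores bounds, MxConstraints, restarts, and most of the options documented here.

# 'simplex' is an (n+1)-by-n matrix of vertices; 'f' is the objective function.
nmIterate <- function(simplex, f, alpha=1, betao=0.5, betai=0.5, gamma=2, sigma=0.5) {
  fvals   <- apply(simplex, 1, f)
  ord     <- order(fvals)
  simplex <- simplex[ord, , drop=FALSE]            # sort vertices from best to worst fit
  fvals   <- fvals[ord]
  n       <- ncol(simplex)
  worst   <- simplex[n + 1, ]
  subcent <- colMeans(simplex[1:n, , drop=FALSE])  # "subcentroid" of the best n vertices
  refl    <- subcent + alpha * (subcent - worst)   # reflection
  if (f(refl) < fvals[1]) {
    expd <- subcent + gamma * (refl - subcent)     # expansion
    # "greedy expansion" (the greedyMinimize=FALSE default): accept the expansion
    # point whenever it, too, improves on the best vertex
    simplex[n + 1, ] <- if (f(expd) < fvals[1]) expd else refl
  } else if (f(refl) < fvals[n]) {
    simplex[n + 1, ] <- refl                       # plain reflection accepted
  } else {
    shrinkNeeded <- TRUE
    if (f(refl) < fvals[n + 1]) {                  # outside contraction
      oc <- subcent + betao * (refl - subcent)
      if (f(oc) <= f(refl)) { simplex[n + 1, ] <- oc; shrinkNeeded <- FALSE }
    } else {                                       # inside contraction
      ic <- subcent + betai * (worst - subcent)
      if (f(ic) < fvals[n + 1]) { simplex[n + 1, ] <- ic; shrinkNeeded <- FALSE }
    }
    if (shrinkNeeded) {                            # shrink all other vertices toward the best
      best <- simplex[1, ]
      for (i in 2:(n + 1)) simplex[i, ] <- best + sigma * (simplex[i, ] - best)
    }
  }
  simplex
}
# One illustrative step on a simple quadratic objective:
nmIterate(rbind(c(0, 0), c(1, 0), c(0, 1)), function(x) sum((x - c(2, 3))^2))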
Argument iniSimplexType dictates how the initial simplex will be constructed from the start values if argument iniSimplexMat is NULL, and how the simplex will be re-initialized in the case of a restart. In all four cases, the vector of start values constitutes the "starting vertex" of the initial simplex. If iniSimplexType="regular", the initial simplex is simply a regular simplex with edge length equal to iniSimplexEdge. A "right" simplex is constructed by incrementing each free parameter by iniSimplexEdge from its starting value; thus, all the edges that intersect at the starting vertex do so at right angles. A "smartRight" simplex is constructed similarly, except that each free parameter is both incremented and decremented by iniSimplexEdge, and of those two points the one with the smaller fitfunction value is retained as a vertex. A "random" simplex is constructed by randomly perturbing the start values, in a manner similar to the default for mxTryHard(), to generate the coordinates of the other vertices. The user is advised that bounds on the free parameters may keep the initial simplex from having the requested regularity or edge length, and that iniSimplexType is at best a suggestion in the presence of equality MxConstraints. Illustrative constructor calls are shown below.
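For example (the option values here are arbitrary illustrations, not recommendations):

planRegular <- mxComputeNelderMead(iniSimplexType="regular", iniSimplexEdge=2)
planSmart   <- mxComputeNelderMead(iniSimplexType="smartRight", iniSimplexEdge=0.5)
planRandom  <- mxComputeNelderMead(iniSimplexType="random")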
Note that if argument iniSimplexMat has nonzero length, the actual start values of the MxModel's free parameters are not used as a vertex of the initial simplex (unless one of the rows of iniSimplexMat happens to contain those start values).
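Alternatively, the vertices can be supplied directly. The sketch below assumes a model with three free parameters and no equality MxConstraints, and it assumes the columns follow the order of the model's free parameters; the numeric values are purely illustrative.

vtx <- rbind(c(0.5, 0.5, 1.0),
             c(1.5, 0.5, 1.0),
             c(0.5, 1.5, 1.0),
             c(0.5, 0.5, 2.0))   # four vertices for a three-parameter problem
planCustom <- mxComputeNelderMead(iniSimplexMat=vtx)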
If the simplex is restarted, a new simplex is constructed per argument iniSimplexType, with edge length equal to the distance between the current best and second-best vertices, and with the current best vertex used as the "first" vertex.
If greedyMinimize=FALSE, "greedy expansion" (Singer & Singer, 2004) is used: if the expansion point and reflection point both have smaller fitfunction values than the best vertex, the expansion point is accepted. If greedyMinimize=TRUE, "greedy minimization" (Singer & Singer, 2004) is used: if the expansion point and the reflection point both have smaller fitfunction values than the best vertex, the better of the two new points is accepted.
If argument altContraction=TRUE, the "modified contraction step" of Gill et al. (1982, Chapter 4) is used, and the candidate point is contracted toward the best vertex instead of toward the subcentroid.
If positive, the first element of argument stagnCtrl sets a threshold for the number of successive iterations in which the best vertex of the simplex does not change, after which the algorithm is said to be "stagnant" (in a sense similar to that of Kelley, 1999). To attempt to remedy the stagnation, the simplex is restarted. If positive, the second element of argument stagnCtrl sets a threshold for the number of restarts conducted, beyond which stagnation no longer triggers a restart. The rationale for the second element is that the best vertex may not change for many iterations when the optimizer is close to convergence, under which circumstances restarting would be counterproductive and would, in any event, require additional fitfunction evaluations.
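For instance, a minimal sketch that restarts after 10 stagnant iterations but allows at most 3 stagnation-triggered restarts (values chosen only for illustration):

planStagn <- mxComputeNelderMead(stagnCtrl=c(10L, 3L))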
If argument validationRestart=TRUE, then when the optimizer has successfully converged, it will restart the simplex and attempt to improve upon the tentative solution it has already found. This validation restart (Gill et al., 1982, Chapter 4) always re-initializes the simplex as a regular simplex, centered on the best vertex of the tentative solution, with edge length equal to the distance between the best and worst vertices of the tentative solution. Optimization then proceeds until convergence to a solution with a better fit value, or until 2n iterations have elapsed.
The Nelder-Mead optimizer is considered to have successfully converged if (1) the largest l-infinity norm of the vector differences between the best vertex and the other vertices is less than argument xTolProx, or (2) the largest absolute difference in fit value between the best vertex and the other vertices is less than fTolProx.
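These two criteria can be written out explicitly. The sketch below evaluates them for a small hypothetical simplex (rows are vertices, row 1 is the best vertex) with made-up fit values:

simplex <- rbind(c(0.5, 1.0),
                 c(0.5 + 1e-9, 1.0),
                 c(0.5, 1.0 - 1e-9))
fit <- c(12.3456780, 12.3456781, 12.3456782)
# (1) domain criterion: largest l-infinity norm of (vertex - best vertex)
domainCrit <- max(apply(abs(sweep(simplex[-1, , drop=FALSE], 2, simplex[1, ])), 1, max))
# (2) range criterion: largest absolute fit difference from the best vertex
rangeCrit <- max(abs(fit[-1] - fit[1]))
domainCrit < 1e-8 || rangeCrit < 1e-8   # compare against the xTolProx / fTolProx defaults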
If argument doPseudoHessian=TRUE, there are no equality MxConstraints, and the "l1p" method (see below) is not in use for inequality MxConstraints, then OpenMx will attempt to calculate the "pseudo-Hessian" or "curvature" matrix as described in the appendix to Nelder & Mead (1965). If successful, this matrix will be stored in the 'output' slot of the post-run MxComputeNelderMead object. Although crude, its inverse can be used as an estimate of the repeated-sampling covariance matrix of the free parameters when the usual finite-differences Hessian is unreliable.
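After a run, the stored matrix can be inspected via the compute object's output. In the sketch below, nmFit is the fitted model from the earlier sketch, and the element name 'pseudoHessian' is an assumption made for illustration, so check names(out) for the actual name:

out <- nmFit$compute$output            # output of the post-run compute object
names(out)
if (!is.null(out$pseudoHessian)) {     # element name assumed; verify with names(out)
  covEst <- solve(out$pseudoHessian)   # crude covariance-matrix estimate
  sqrt(diag(covEst))                   # corresponding rough standard errors
}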
OpenMx's implementation of Nelder-Mead can handle nonlinear inequality MxConstraints reasonably well. Its default method for doing so, with argument ineqConstraintMthd="soft", imposes a "soft" feasibility constraint by assigning a fitfunction value of bignum to points that violate the constraints by more than mxOption 'Feasibility tolerance'. Alternately, with argument ineqConstraintMthd="eqMthd", inequality MxConstraints can be handled by the same method provided to argument eqConstraintMthd, whether or not equality MxConstraints are present.
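For example (both calls use only arguments documented above; the backtrackCtrl values are illustrative):

# Default "soft" handling: infeasible points receive the fit value bignum
planSoft <- mxComputeNelderMead(ineqConstraintMthd="soft", bignum=1e35)
# Route inequalities through the equality machinery, here with backtracking
planEqM  <- mxComputeNelderMead(ineqConstraintMthd="eqMthd",
                                eqConstraintMthd="backtrack",
                                backtrackCtrl=c(0.5, 10))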
OpenMx's implementation of Nelder-Mead respects equality MxConstraints, but does not handle them especially well. Its effectiveness at handling equalities may be improved by providing a matrix to argument iniSimplexMat that ensures all of the initial vertices are feasible. Users are warned that this Nelder-Mead implementation will not work correctly with MxModels containing redundant equality MxConstraints, and it presently has no way of detecting whether any are present. If argument eqConstraintMthd="GDsearch" (the default), then whenever Nelder-Mead evaluates the fitfunction at an infeasible point, it initiates a subsidiary optimization that uses SLSQP to find the nearest (in squared Euclidean distance) feasible point, and substitutes that feasible point for the infeasible one. The user should note that the function evaluations occurring during this subsidiary optimization are counted toward the total number of fitfunction evaluations during the call to mxRun(). The effectiveness of the 'GDsearch' method is often improved by setting mxOption 'Feasibility tolerance' to a stricter (smaller) value than the on-load default. The method specified by eqConstraintMthd="soft" is described in the preceding paragraph. If argument eqConstraintMthd="backtrack", the optimizer attempts to backtrack from an infeasible point to a feasible point in a manner similar to that of Ghiasi et al. (2008), except that it is used with all new points, not just those encountered via reflection, expansion, and contraction. In this case, the displacement from the prior point to the candidate point is reduced by the proportion given as the first element of argument backtrackCtrl, and a new candidate point is considered. This process is repeated until the feasibility of the candidate point is restored, or until the number of attempts exceeds the second element of argument backtrackCtrl. If argument eqConstraintMthd="l1p", Nelder-Mead is used as part of an l_1-penalty algorithm. When using "l1p", the simplex gradient (Kelley, 1999) and "pseudo-Hessian" are never calculated.
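A minimal sketch of the suggestions above: tighten the global feasibility tolerance before using the default 'GDsearch' method, or switch to the l1-penalty variant (the tolerance value is chosen only for illustration):

oldTol <- mxOption(NULL, "Feasibility tolerance")      # save the current global setting
mxOption(NULL, "Feasibility tolerance", 1e-7)          # a stricter (smaller) value, per the advice above
planGD <- mxComputeNelderMead(eqConstraintMthd="GDsearch")
planL1 <- mxComputeNelderMead(eqConstraintMthd="l1p")  # no simplex gradient or pseudo-Hessian
mxOption(NULL, "Feasibility tolerance", oldTol)        # restore the previous setting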
Returns an object of class 'MxComputeNelderMead'.
Ghiasi, H., Pasini, D., & Lessard, L. (2008). Constrained globalized Nelder-Mead method for simultaneous structural and manufacturing optimization of a composite bracket. Journal of Composite Materials, 42(7), p. 717-736. doi: 10.1177/0021998307088592
Gill, P. E., Murray, W., & Wright, M. H. (1982). Practical Optimization. Bingley, UK: Emerald Group Publishing Ltd.
Kelley, C. T. (1999). Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition. SIAM Journal on Optimization, 10(1), p. 43-55.
Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. The Computer Journal, 7, p. 308-313.
Singer, S., & Nelder, J. (2009). Nelder-Mead algorithm. Scholarpedia, 4(7):2928, revision #91557. http://www.scholarpedia.org/article/Nelder-Mead_algorithm
Singer, S., & Singer, S. (2004). Efficient implementation of the Nelder-Mead search algorithm. Applied Numerical Analysis & Computational Mathematics Journal, 1(2), p. 524-534. doi: 10.1002/anac.200410015
library(OpenMx)
# Construct a compute step with default Nelder-Mead settings and inspect it
foo <- mxComputeNelderMead()
str(foo)