mxComputeGradientDescent: Optimize parameters using a gradient descent optimizer


View source: R/MxCompute.R

Description

This optimizer does not require analytic derivatives of the fit function. The fully open-source CRAN version of OpenMx offers two choices, SLSQP (from the NLOPT collection) and CSOLNP. The OpenMx Team's version of OpenMx offers three optimizers: SLSQP, CSOLNP, and NPSOL.
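For example, a specific engine can be requested by name when building a compute plan. A minimal sketch (engine = NULL, the default, lets OpenMx choose its current default optimizer):

library(OpenMx)

# Request SLSQP explicitly; SLSQP and CSOLNP are available in the CRAN build.
plan <- mxComputeGradientDescent(engine = "SLSQP")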

Usage

mxComputeGradientDescent(freeSet = NA_character_, ..., engine = NULL,
  fitfunction = "fitfunction", verbose = 0L, tolerance = NA_real_,
  useGradient = NULL, warmStart = NULL, nudgeZeroStarts = mxOption(NULL,
  "Nudge zero starts"), maxMajorIter = NULL, gradientAlgo = mxOption(NULL,
  "Gradient algorithm"),
  gradientIterations = imxAutoOptionValue("Gradient iterations"),
  gradientStepSize = imxAutoOptionValue("Gradient step size"))

Arguments

freeSet

names of matrices containing free parameters.

...

Not used. Forces remaining arguments to be specified by name.

engine

character string: one of 'NPSOL', 'SLSQP', or 'CSOLNP'

fitfunction

name of the fitfunction (defaults to 'fitfunction')

verbose

level of debugging output

tolerance

how close to the optimum is close enough (also known as the optimality tolerance)

useGradient

whether to use the analytic gradient (if available)

warmStart

a Cholesky factored Hessian to use as the NPSOL Hessian starting value (preconditioner)

nudgeZeroStarts

whether to nudge any zero starting values prior to optimization (default TRUE)

maxMajorIter

maximum number of major iterations

gradientAlgo

one of c('forward','central')

gradientIterations

number of Richardson iterations to use for the gradient

gradientStepSize

the step size for the gradient

Details

One of the most important options for SLSQP is gradientAlgo. By default, the central-difference method is used; it requires 2 * gradientIterations function evaluations per parameter per gradient, but can be much more accurate than the forward-difference method, which requires only gradientIterations function evaluations per parameter per gradient. The forward method is faster and often works well enough, but its less precise gradient estimates may prevent SLSQP from fully optimizing a given model, possibly resulting in code RED (status code 5 or 6).
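To make these costs concrete, the evaluation counts can be tallied directly. A small illustration (evalsPerGradient is a hypothetical helper, not part of OpenMx):

# Fit-function evaluations needed for one numerical gradient:
# forward difference: gradientIterations per free parameter,
# central difference: 2 * gradientIterations per free parameter.
evalsPerGradient <- function(nParams, gradientIterations,
                             algo = c("central", "forward")) {
  algo <- match.arg(algo)
  perParam <- if (algo == "central") 2L * gradientIterations else gradientIterations
  nParams * perParam
}

evalsPerGradient(10, 2, "central")  # 40 evaluations per gradient
evalsPerGradient(10, 2, "forward")  # 20 evaluations per gradient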

Currently, only SLSQP uses arguments gradientIterations and gradientAlgo. CSOLNP always uses the forward method; NPSOL usually uses the forward method, but adaptively switches to central under certain circumstances.
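For instance, the gradient method and iteration count can be set explicitly when requesting SLSQP. A sketch (the values shown are illustrative, not recommendations):

# Trade accuracy for speed: forward differences with extra Richardson
# iterations. Only SLSQP honors gradientAlgo and gradientIterations.
plan <- mxComputeGradientDescent(engine = "SLSQP",
                                 gradientAlgo = "forward",
                                 gradientIterations = 2L)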

CSOLNP uses the value of argument gradientStepSize as-is, whereas SLSQP internally scales it by a factor of 100. The purpose of this transformation is to obtain roughly the same accuracy given other differences in numerical procedure. NPSOL ignores gradientStepSize, and instead uses a function of mxOption "Function precision" to determine its gradient step size.
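So the same nominal step size requests different raw steps from the two optimizers. A sketch (1.0e-7 is an illustrative value, not a recommendation):

# CSOLNP uses 1.0e-7 as given; SLSQP would scale the same request by 100.
# NPSOL ignores this argument entirely.
plan <- mxComputeGradientDescent(engine = "CSOLNP", gradientStepSize = 1.0e-7)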

Currently, only SLSQP and NPSOL can use analytic gradients, and only NPSOL uses warmStart.
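For a warm start, NPSOL accepts a Cholesky factor of a Hessian from an earlier fit. A sketch assuming a prior result prevFit (a hypothetical name for an earlier mxRun() result whose output contains a Hessian) and an OpenMx build that includes NPSOL:

# chol() returns a Cholesky factor of the Hessian from the previous run,
# which NPSOL uses as its Hessian starting value (preconditioner).
ws <- chol(prevFit$output$hessian)
plan <- mxComputeGradientDescent(engine = "NPSOL", warmStart = ws)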

References

Luenberger, D. G. & Ye, Y. (2008). Linear and nonlinear programming. Springer.

Examples

data(demoOneFactor)
factorModel <- mxModel(
  name = "One Factor",
  mxMatrix(type = "Full", nrow = 5, ncol = 1, free = FALSE, values = 0.2, name = "A"),
  mxMatrix(type = "Symm", nrow = 1, ncol = 1, free = FALSE, values = 1, name = "L"),
  mxMatrix(type = "Diag", nrow = 5, ncol = 5, free = TRUE, values = 1, name = "U"),
  mxAlgebra(expression = A %*% L %*% t(A) + U, name = "R"),
  mxExpectationNormal(covariance = "R", dimnames = names(demoOneFactor)),
  mxFitFunctionML(),
  mxData(observed = cov(demoOneFactor), type = "cov", numObs = 500),
  mxComputeSequence(steps = list(
    mxComputeGradientDescent(),
    mxComputeNumericDeriv(),
    mxComputeStandardError(),
    mxComputeHessianQuality())))
factorModelFit <- mxRun(factorModel)
factorModelFit$output$conditionNumber # 29.5
