gauss_newton    R Documentation
Description

Implements a full-featured Gauss-Newton algorithm for non-linear optimization, optimized for Structural Equation Modeling (SEM).

Usage
gauss_newton(
start,
objective,
residual = NULL,
gradient = NULL,
hessian = NULL,
jac = NULL,
lower = -Inf,
upper = Inf,
control = list(),
...
)
Arguments

start      Numeric vector. Starting values for the optimization parameters.
objective  Function. The objective function to minimize.
residual   Function (optional). Returns the vector of residuals.
gradient   Function (optional). Gradient of the objective function.
hessian    Function (optional). Hessian matrix of the objective function.
jac        Function (optional). Jacobian matrix of the residuals.
lower      Numeric vector. Lower bounds for box constraints.
upper      Numeric vector. Upper bounds for box constraints.
control    List. Control parameters, including convergence flags.
...        Additional arguments passed to the objective, gradient, and Hessian functions.
Details

gauss_newton is a specialized optimization algorithm for least-squares
and maximum likelihood problems where the objective function can be
expressed as a sum of squared residuals.
Scaling and SEM Consistency:
To ensure consistent simulation results and standard error (SE) calculations,
this implementation scales the gradient (2 J^T r) and the approximate
Hessian (2 J^T J) to match the maximum likelihood (ML)
fitting function F_ML. This alignment is critical when computing
asymptotic covariance matrices via the formula (2/n) H^{-1}.
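As a minimal sketch of the scaling logic above (the residual vector, Jacobian, and sample size here are illustrative, not taken from the package):

```r
# Sketch (base R): ML-consistent scaling of the Gauss-Newton quantities.
r <- c(0.4, -0.2, 0.1)                      # residual vector
J <- matrix(c(1, 0, 2, 1, 0, 3), nrow = 3)  # 3 residuals x 2 parameters
n <- 200                                    # sample size (illustrative)

grad <- 2 * t(J) %*% r   # gradient 2 J^T r
H    <- 2 * t(J) %*% J   # approximate Hessian 2 J^T J

# Asymptotic covariance of the estimates: (2/n) H^{-1}
acov <- (2 / n) * solve(H)
se   <- sqrt(diag(acov)) # standard errors
```

With the Hessian on the F_ML scale, the same (2/n) H^{-1} formula used elsewhere in SEM software applies unchanged.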
Comparison with Newton-Raphson:
Unlike newton_raphson or modified_newton, which require the full
second-order Hessian, Gauss-Newton approximates the Hessian using the
Jacobian of the residuals. This is computationally more efficient and
provides a naturally positive-semidefinite approximation, though a ridge
adjustment is still provided for numerical stability.
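The Gauss-Newton step described above can be sketched in a few lines of base R; the residual function and Jacobian here are illustrative stand-ins, not the package's internals:

```r
# Sketch (base R): one Gauss-Newton step with no second-order terms.
residual <- function(x) c(x[1] - 1, 2 * (x[2] - 3))
jac      <- function(x) matrix(c(1, 0, 0, 2), nrow = 2)

x <- c(0, 0)
r <- residual(x)
J <- jac(x)

# The approximate Hessian J^T J replaces the full second-order Hessian;
# the step solves (J^T J) p = -J^T r.
p <- solve(t(J) %*% J, -t(J) %*% r)
x_new <- x + as.vector(p)  # the residuals are linear, so one step suffices
```

Because the residuals here are linear in x, a single step reaches the minimizer; for non-linear residuals the step is iterated.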
Ridge Adjustment Strategy:
The function includes a "Ridge Rescue" mechanism. If the approximate Hessian
is singular or poorly conditioned for Cholesky decomposition, it iteratively
adds a diagonal ridge (\tau I) until numerical stability is achieved.
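A minimal sketch of such a ridge rescue loop, assuming the starting ridge and growth factor shown (the package's actual values and internals may differ):

```r
# Sketch (base R): if the Cholesky factorization of the approximate Hessian
# fails, grow a diagonal ridge tau*I until it succeeds.
ridge_chol <- function(H, tau = 1e-8, grow = 10, max_tries = 20) {
  for (i in seq_len(max_tries)) {
    R <- tryCatch(chol(H + tau * diag(nrow(H))), error = function(e) NULL)
    if (!is.null(R)) return(list(R = R, tau = tau))
    tau <- tau * grow  # inflate the ridge and retry
  }
  stop("approximate Hessian could not be stabilized")
}

H_sing <- matrix(c(1, 1, 1, 1), nrow = 2)  # singular: plain chol() fails
res <- ridge_chol(H_sing)
```

The returned tau records how much regularization was needed, which can be useful when diagnosing ill-conditioned models.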
Value

A list containing optimization results and iteration metadata.
References

Nocedal, J., & Wright, S. J. (2006). Numerical Optimization. Springer.

Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley.
Examples

# Minimize a simple two-parameter quadratic
quad <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2
res <- gauss_newton(start = c(0, 0), objective = quad)
print(res$par)  # should approach c(2, -1), the minimizer of quad