newton_raphson: Pure Newton-Raphson Optimization

View source: R/newton_raphson.R

newton_raphson {optimflex}    R Documentation

Pure Newton-Raphson Optimization

Description

Implements the standard Newton-Raphson algorithm for non-linear optimization without Hessian modifications or ridge adjustments.

Usage

newton_raphson(
  start,
  objective,
  gradient = NULL,
  hessian = NULL,
  lower = -Inf,
  upper = Inf,
  control = list(),
  ...
)

Arguments

start

Numeric vector. Starting values for the optimization parameters.

objective

Function. The objective function to minimize.

gradient

Function (optional). Gradient of the objective function.

hessian

Function (optional). Hessian matrix of the objective function.

lower

Numeric vector. Lower bounds for box constraints.

upper

Numeric vector. Upper bounds for box constraints.

control

List. Control parameters including convergence flags:

  • use_abs_f: Logical. Use absolute change in objective for convergence.

  • use_rel_f: Logical. Use relative change in objective for convergence.

  • use_abs_x: Logical. Use absolute change in parameters for convergence.

  • use_rel_x: Logical. Use relative change in parameters for convergence.

  • use_grad: Logical. Use gradient norm for convergence.

  • use_posdef: Logical. Verify positive definiteness at convergence.

  • use_pred_f: Logical. Record predicted objective decrease.

  • use_pred_f_avg: Logical. Record average predicted decrease.

  • diff_method: String. Method for numerical differentiation.
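
For illustration, a control list enabling gradient-norm and relative-parameter convergence checks might look like this (the flag names come from the list above; the particular combination shown is an assumption, not a package default):

```r
# Hypothetical control settings using the flags documented above
ctrl <- list(use_grad = TRUE, use_rel_x = TRUE, use_abs_f = FALSE)
```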

...

Additional arguments passed to the objective, gradient, and hessian functions.

Details

newton_raphson provides a classic second-order optimization approach: at each iteration the parameters are updated by the full Newton step p = -H^{-1} g, where g and H are the gradient and Hessian at the current point.

Comparison with Modified Newton: Unlike modified_newton, this function does not apply dynamic ridge adjustments (Levenberg-Marquardt style) to the Hessian. If the Hessian is singular or cannot be inverted via solve(), the algorithm will terminate. This "pure" implementation is often preferred in simulation studies where the behavior of the exact Newton step is of interest.
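
The exact Newton step and its failure mode on a singular Hessian can be sketched as follows (an illustration of the behavior described above, not the package's actual implementation):

```r
# Sketch of a pure Newton-Raphson loop: take the exact step -H^{-1} g,
# and terminate if solve() fails on a singular Hessian.
f <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2      # objective (for reference)
g <- function(x) c(2 * (x[1] - 2), 2 * (x[2] + 1)) # analytic gradient
H <- function(x) diag(c(2, 2))                     # analytic Hessian

x <- c(0, 0)
for (i in 1:10) {
  step <- tryCatch(solve(H(x), -g(x)),
                   error = function(e) NULL)       # singular Hessian: stop
  if (is.null(step)) break
  x <- x + step
  if (sqrt(sum(g(x)^2)) < 1e-8) break              # gradient-norm check
}
x  # converges to c(2, -1) in one step for this quadratic
```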

Predicted Decrease: This function explicitly calculates the predicted decrease (pred_dec), the expected reduction in the objective function value under the local quadratic model:

m(p) = f + g^T p + (1/2) p^T H p
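
For the Newton step p = -H^{-1} g, the predicted decrease f(x) - m(p) equals -(g^T p + (1/2) p^T H p) and can be computed directly (an illustrative sketch; the variable names here are not taken from the package internals):

```r
# Predicted decrease under the local quadratic model at x = c(0, 0)
f <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2
x <- c(0, 0)
g <- c(2 * (x[1] - 2), 2 * (x[2] + 1))    # gradient at x
H <- diag(c(2, 2))                        # Hessian at x
p <- solve(H, -g)                         # Newton step
pred_dec <- -(sum(g * p) + 0.5 * sum(p * (H %*% p)))
pred_dec  # equals f(x) here, since the quadratic model is exact for a quadratic
```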

Stability and Simulations: All return values are explicitly cast to scalars (e.g., as.numeric, as.logical) to ensure stability when the function is called within large-scale simulation loops or packaged into data frames.
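
The motivation for scalar casting can be seen when stacking per-replication results into a data frame, which requires length-1 atomic components (a minimal sketch, not package code):

```r
# Explicitly cast results to scalars so rows bind cleanly across replications
one_run <- function(seed) {
  set.seed(seed)
  list(value = as.numeric(runif(1)), converged = as.logical(TRUE))
}
out <- do.call(rbind, lapply(1:3, function(s) as.data.frame(one_run(s))))
nrow(out)  # 3
```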

Value

A list containing optimization results and iteration metadata.

References

  • Nocedal, J., & Wright, S. J. (2006). Numerical Optimization. Springer.

  • Bollen, K. A. (1989). Structural Equations with Latent Variables. Wiley.

Examples

quad <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2
res <- newton_raphson(start = c(0, 0), objective = quad)
print(res$par)

optimflex documentation built on April 11, 2026, 5:06 p.m.