modified_newton: Modified Newton-Raphson Optimization

View source: R/modified_newton.R

modified_newton {optimflex}    R Documentation

Modified Newton-Raphson Optimization

Description

Implements an optimized Newton-Raphson algorithm for non-linear optimization featuring dynamic ridge adjustment and backtracking line search.

Usage

modified_newton(
  start,
  objective,
  gradient = NULL,
  hessian = NULL,
  lower = -Inf,
  upper = Inf,
  control = list(),
  ...
)

Arguments

start

Numeric vector. Starting values for the optimization parameters.

objective

Function. The objective function to minimize.

gradient

Function (optional). Gradient of the objective function.

hessian

Function (optional). Hessian matrix of the objective function.

lower

Numeric vector. Lower bounds for box constraints.

upper

Numeric vector. Upper bounds for box constraints.

control

List. Control parameters including convergence flags:

  • use_abs_f: Logical. Use absolute change in objective for convergence.

  • use_rel_f: Logical. Use relative change in objective for convergence.

  • use_abs_x: Logical. Use absolute change in parameters for convergence.

  • use_rel_x: Logical. Use relative change in parameters for convergence.

  • use_grad: Logical. Use gradient norm for convergence.

  • use_posdef: Logical. Verify positive definiteness at convergence.

  • use_pred_f: Logical. Record predicted objective decrease.

  • use_pred_f_avg: Logical. Record average predicted decrease.

  • grad_diff: String. Method for gradient differentiation.

  • hess_diff: String. Method for Hessian differentiation.

...

Additional arguments passed to objective, gradient, and Hessian functions.

Details

modified_newton is a line search optimization algorithm that uses second-order curvature information (the Hessian matrix) to find a local minimum of an objective function.

Modified Newton vs. Trust Region: Unlike the dogleg and double_dogleg functions, which constrain the step size with a trust-region approach, this function uses a line search. It first computes the Newton direction (the solution to H \Delta x = -g) and then performs a backtracking line search for a step length \alpha that satisfies the sufficient decrease (Armijo) condition.
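A single iteration of this scheme can be sketched in plain R as follows. This is an illustrative re-implementation of the two steps just described (Newton direction, then Armijo backtracking), not the package's internal code; the function name and defaults (c1, shrink) are assumptions.

```r
# One modified-Newton step: solve H d = -g, then backtrack until the
# Armijo sufficient-decrease condition f(x + a*d) <= f(x) + c1*a*g'd holds.
newton_step <- function(x, f, g, H, c1 = 1e-4, shrink = 0.5, max_back = 50) {
  gx <- g(x)
  d <- solve(H(x), -gx)        # Newton direction
  fx <- f(x)
  slope <- sum(gx * d)         # directional derivative along d (negative for descent)
  alpha <- 1
  for (i in seq_len(max_back)) {
    if (f(x + alpha * d) <= fx + c1 * alpha * slope) break  # Armijo condition
    alpha <- alpha * shrink    # halve the step and try again
  }
  x + alpha * d
}

quad  <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2
gquad <- function(x) c(2 * (x[1] - 2), 2 * (x[2] + 1))
Hquad <- function(x) diag(2, 2)
newton_step(c(0, 0), quad, gquad, Hquad)  # one full Newton step reaches c(2, -1)
```

For this quadratic the full step (alpha = 1) already satisfies the Armijo condition, so the minimizer is reached in one iteration; on non-quadratic objectives the backtracking loop typically shrinks alpha several times.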

Dynamic Ridge Adjustment: If the Hessian matrix H is not positive definite (making it unsuitable for Cholesky decomposition), the algorithm applies a dynamic ridge adjustment. A diagonal matrix \tau I is added to the Hessian, where \tau is increased until the matrix becomes positive definite. This ensures the search direction always remains a descent direction.
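The ridge adjustment can be sketched as below. This is an assumed illustration of the behaviour described above (grow \tau until the Cholesky factorization succeeds), not the package's internals; the starting value and growth factor are arbitrary choices.

```r
# Add tau * I to H, increasing tau until chol() succeeds, i.e. until
# H + tau*I is positive definite.
ridge_hessian <- function(H, tau0 = 1e-8, growth = 10) {
  tau <- 0
  repeat {
    Hr <- H + tau * diag(nrow(H))
    ok <- tryCatch({ chol(Hr); TRUE }, error = function(e) FALSE)
    if (ok) return(Hr)
    tau <- if (tau == 0) tau0 else tau * growth
  }
}

H_indef <- matrix(c(1, 2, 2, 1), 2, 2)  # eigenvalues 3 and -1: not positive definite
H_pd <- ridge_hessian(H_indef)
min(eigen(H_pd)$values) > 0             # TRUE: the ridged Hessian is positive definite
```

Because the ridged matrix is positive definite, the resulting direction d = -(H + \tau I)^{-1} g always has negative inner product with the gradient, i.e. it is a descent direction.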

Differentiation Methods: The function allows for independent selection of differentiation methods for the gradient and Hessian:

  • forward: Standard forward-difference numerical differentiation.

  • central: Central-difference (more accurate but slower).

  • complex: Complex-step differentiation (highly accurate for gradients).

  • richardson: Richardson extrapolation via the numDeriv package.
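The first three schemes follow standard finite-difference formulas, sketched below for a scalar function (richardson delegates to numDeriv and is omitted). These are textbook formulas for illustration, not the package's internal code; the step sizes are conventional defaults, not documented values.

```r
grad_forward <- function(f, x, h = 1e-8)  (f(x + h) - f(x)) / h
grad_central <- function(f, x, h = 1e-6)  (f(x + h) - f(x - h)) / (2 * h)
# Complex step: no subtraction, so no cancellation error; h can be tiny.
grad_complex <- function(f, x, h = 1e-20) Im(f(x + h * 1i)) / h

f <- function(x) exp(x) * sin(x)           # true derivative: exp(x)*(sin(x)+cos(x))
true_grad <- exp(1) * (sin(1) + cos(1))
abs(grad_forward(f, 1) - true_grad)        # largest error (first-order truncation)
abs(grad_central(f, 1) - true_grad)        # smaller (second-order truncation)
abs(grad_complex(f, 1) - true_grad)        # near machine precision
```

The complex-step method requires the objective to accept complex input (as exp and sin do here), which is why it is listed only as an option rather than the default.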

Value

A list containing optimization results and iteration metadata.

Examples

quad <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2
res <- modified_newton(start = c(0, 0), objective = quad)
print(res$par)

optimflex documentation built on April 11, 2026, 5:06 p.m.