double_dogleg: Double Dogleg Trust-Region Optimization

View source: R/double_dogleg.R

double_dogleg    R Documentation

Double Dogleg Trust-Region Optimization

Description

Implements the Double Dogleg Trust-Region algorithm for non-linear optimization.

Usage

double_dogleg(
  start,
  objective,
  gradient = NULL,
  hessian = NULL,
  lower = -Inf,
  upper = Inf,
  control = list(),
  ...
)

Arguments

start

Numeric vector. Starting values for the optimization parameters.

objective

Function. The objective function to minimize.

gradient

Function (optional). Gradient of the objective function.

hessian

Function (optional). Hessian matrix of the objective function.

lower

Numeric vector. Lower bounds for box constraints.

upper

Numeric vector. Upper bounds for box constraints.

control

List. Control parameters, including logical convergence flags prefixed with 'use_':

  • use_abs_f: Logical. Use absolute change in objective for convergence.

  • use_rel_f: Logical. Use relative change in objective for convergence.

  • use_abs_x: Logical. Use absolute change in parameters for convergence.

  • use_rel_x: Logical. Use relative change in parameters for convergence.

  • use_grad: Logical. Use gradient norm for convergence.

  • use_posdef: Logical. Verify positive definiteness at convergence.

  • use_pred_f: Logical. Record predicted objective decrease.

  • use_pred_f_avg: Logical. Record average predicted decrease.

...

Additional arguments passed to objective, gradient, and Hessian functions.

Details

This function implements the Double Dogleg method within a Trust-Region framework, primarily based on the work of Dennis and Mei (1979).

Trust-Region vs. Line Search: While Line Search methods (like BFGS) first determine a search direction and then find an appropriate step length, Trust-Region methods define a neighborhood around the current point (the trust region with radius \Delta) where a local quadratic model is assumed to be reliable. The algorithm then finds a step that minimizes this model within the radius. This approach is generally more robust, especially when the Hessian is not positive definite.
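
The radius-update logic that drives a generic trust-region iteration can be sketched as follows. This is a simplified illustration of the general framework, not the package's internal code; the function name, thresholds, and defaults are assumptions:

trust_region_update <- function(f, x, p, pred_decrease, Delta,
                                eta = 0.1, Delta_max = 10) {
  # Ratio of actual to model-predicted reduction: measures model reliability
  rho <- (f(x) - f(x + p)) / pred_decrease
  if (rho < 0.25) {
    Delta <- Delta / 4                      # model unreliable: shrink region
  } else if (rho > 0.75 && sqrt(sum(p^2)) >= Delta - 1e-12) {
    Delta <- min(2 * Delta, Delta_max)      # good model, step at boundary: expand
  }
  x_new <- if (rho > eta) x + p else x      # accept step only on sufficient decrease
  list(x = x_new, Delta = Delta)
}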

Powell's Dogleg vs. Double Dogleg: Powell's original Dogleg method (1970) constructs a trajectory consisting of two line segments: one from the current point to the Cauchy point, and another from the Cauchy point to the Newton point. The "Double Dogleg" modification by Dennis and Mei (1979) introduces an intermediate "bias" point (p_W) along the Newton direction.

  • Cauchy Point (p_C): The minimizer of the quadratic model along the steepest descent direction.

  • Newton Point (p_N): The unconstrained minimizer of the quadratic model, p_N = -B^{-1}g.

  • Double Dogleg Point (p_W): A point defined as \gamma \cdot p_N, where \gamma is a scaling factor (bias) that ensures the path stays closer to the Newton direction while maintaining monotonic descent in the model.

This modification allows the algorithm to perform more like a Newton method earlier in the optimization process compared to the standard Dogleg.
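
Under the quadratic model m(p) = g^T p + (1/2) p^T B p with positive definite B, the three points above can be computed as in this sketch. It is illustrative only; the helper name is hypothetical, and the bias factor \eta = 0.2 + 0.8 \|g\|^4 / ((g^T B g)(g^T B^{-1} g)) follows the common Dennis-Mei choice rather than this package's source:

double_dogleg_points <- function(g, B) {
  gg  <- sum(g * g)                         # ||g||^2
  gBg <- drop(t(g) %*% B %*% g)             # g' B g
  p_C <- -(gg / gBg) * g                    # Cauchy point along -g
  p_N <- drop(-solve(B, g))                 # Newton point
  gBinvg <- -sum(g * p_N)                   # g' B^{-1} g
  eta <- 0.2 + 0.8 * gg^2 / (gBg * gBinvg)  # Dennis-Mei bias; eta <= 1
  list(cauchy = p_C, newton = p_N, bias = eta * p_N)  # bias point p_W
}

The dogleg trajectory then runs 0 -> p_C -> p_W -> p_N, and the step taken is the point where this piecewise-linear path crosses the trust-region boundary (or p_N itself if it lies inside the region).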

Value

A list containing optimization results and iteration metadata.

References

  • Dennis, J. E., & Mei, H. H. (1979). Two New Unconstrained Optimization Algorithms which use Function and Gradient Values. Journal of Optimization Theory and Applications, 28(4), 453-482.

  • Powell, M. J. D. (1970). A Hybrid Method for Nonlinear Equations. In P. Rabinowitz (Ed.), Numerical Methods for Nonlinear Algebraic Equations. Gordon and Breach.

  • Nocedal, J., & Wright, S. J. (2006). Numerical Optimization. Springer.

Examples

quad <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2
res <- double_dogleg(start = c(0, 0), objective = quad)
print(res$par)
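
## A further sketch (not from the package's shipped examples): supplying an
## analytic gradient and selecting convergence tests via the 'use_' control
## flags documented above.
rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
rosen_gr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                          200 * (x[2] - x[1]^2))
res2 <- double_dogleg(start = c(-1.2, 1), objective = rosen,
                      gradient = rosen_gr,
                      control = list(use_rel_f = TRUE, use_grad = TRUE))
print(res2$par)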

optimflex documentation built on April 11, 2026, 5:06 p.m.