View source: R/double_dogleg.R
double_dogleg: R Documentation
Implements the Double Dogleg Trust-Region algorithm for non-linear optimization.
double_dogleg(
start,
objective,
gradient = NULL,
hessian = NULL,
lower = -Inf,
upper = Inf,
control = list(),
...
)
start: Numeric vector. Starting values for the optimization parameters.
objective: Function. The objective function to minimize.
gradient: Function (optional). Gradient of the objective function.
hessian: Function (optional). Hessian matrix of the objective function.
lower: Numeric vector. Lower bounds for box constraints.
upper: Numeric vector. Upper bounds for box constraints.
control: List. Control parameters, including convergence flags starting with 'use_'.
...: Additional arguments passed to the objective, gradient, and Hessian functions.
This function implements the Double Dogleg method within a Trust-Region framework, primarily based on the work of Dennis and Mei (1979).
Trust-Region vs. Line Search:
While Line Search methods (like BFGS) first determine a search direction and then
find an appropriate step length, Trust-Region methods define a neighborhood
around the current point (the trust region with radius \Delta) where a local
quadratic model is assumed to be reliable. The algorithm then finds a step that
minimizes this model within the radius. This approach is generally more robust,
especially when the Hessian is not positive definite.
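The accept/reject logic described above can be sketched in a few lines of R. This is an illustrative simplification, not the internals of double_dogleg: the helper name, the steepest-descent step, and the 0.25 acceptance threshold are assumptions for the sketch.

```r
# Simplified sketch of one trust-region iteration: build the local quadratic
# model, take a step no longer than the radius delta, and accept it only if
# the actual reduction agrees with the reduction predicted by the model.
trust_region_step <- function(f, g, B, x, delta) {
  model <- function(p) f(x) + sum(g * p) + 0.5 * drop(t(p) %*% B %*% p)
  p <- -delta * g / sqrt(sum(g^2))              # steepest-descent step at the boundary
  rho <- (f(x) - f(x + p)) / (f(x) - model(p))  # actual vs. predicted reduction
  if (rho > 0.25) x + p else x                  # reject the step if agreement is poor
}
```

In a full implementation, the same ratio rho also drives the update of the radius itself: shrink the region when rho is small, expand it when the model fits well.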
Powell's Dogleg vs. Double Dogleg:
Powell's original Dogleg method (1970) constructs a trajectory consisting of
two line segments: one from the current point to the Cauchy point, and another
from the Cauchy point to the Newton point. The "Double Dogleg" modification
by Dennis and Mei (1979) introduces an intermediate "bias" point (p_W)
along the Newton direction.
Cauchy Point (p_C): The minimizer of the quadratic model along
the steepest descent direction.
Newton Point (p_N): The unconstrained minimizer of the quadratic model, p_N = -B^{-1}g.
Double Dogleg Point (p_W): A point defined as \gamma \cdot p_N,
where \gamma is a scaling factor (bias) that ensures the path stays
closer to the Newton direction while maintaining monotonic descent in
the model.
This modification allows the algorithm to perform more like a Newton method earlier in the optimization process compared to the standard Dogleg.
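Assuming B is positive definite, the three anchor points of the path can be computed directly. The sketch below is illustrative, not this package's exported API, and the 0.8/0.2 bias formula is the common choice associated with Dennis and Mei, used here as an assumption rather than as this package's documented default.

```r
# Anchor points of the double-dogleg path for the quadratic model
# m(p) = f + g'p + (1/2) p'Bp, assuming B is positive definite.
dogleg_points <- function(g, B) {
  p_N    <- -solve(B, g)                  # Newton point
  gBg    <- drop(t(g) %*% B %*% g)
  p_C    <- -(sum(g^2) / gBg) * g         # Cauchy point
  gBinvg <- -sum(g * p_N)                 # g' B^{-1} g, since p_N = -B^{-1} g
  # gamma_hat <= 1 by the Kantorovich inequality, so the biased point
  # gamma * p_N always falls short of the full Newton step.
  gamma_hat <- sum(g^2)^2 / (gBg * gBinvg)
  gamma <- 0.8 * gamma_hat + 0.2
  list(cauchy = p_C, newton = p_N, gamma = gamma, bias = gamma * p_N)
}
```

The dogleg path then runs from the origin to p_C, from p_C to the bias point gamma * p_N, and finally along the Newton direction; the step taken is where this path crosses the trust-region boundary (or p_N itself, if it lies inside the region).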
A list containing optimization results and iteration metadata.
Dennis, J. E., & Mei, H. H. (1979). Two New Unconstrained Optimization Algorithms which use Function and Gradient Values. Journal of Optimization Theory and Applications, 28(4), 453-482.
Powell, M. J. D. (1970). A Hybrid Method for Nonlinear Equations. Numerical Methods for Nonlinear Algebraic Equations.
Nocedal, J., & Wright, S. J. (2006). Numerical Optimization. Springer.
quad <- function(x) (x[1] - 2)^2 + (x[2] + 1)^2  # minimum at (2, -1)
res <- double_dogleg(start = c(0, 0), objective = quad)
print(res$par)