csolnp | R Documentation |
Nonlinear optimization using augmented Lagrange method (C++ version)
csolnp(
pars,
fn,
gr = NULL,
eq_fn = NULL,
eq_b = NULL,
eq_jac = NULL,
ineq_fn = NULL,
ineq_lower = NULL,
ineq_upper = NULL,
ineq_jac = NULL,
lower = NULL,
upper = NULL,
control = list(),
use_r_version = FALSE,
...
)
pars
a numeric vector of decision variables (length n).
fn
the objective function (must return a scalar).
gr
an optional function for computing the analytic gradient of the objective function (must return a vector of length n).
eq_fn
an optional function for calculating equality constraints.
eq_b
a vector of the equality bounds (required if eq_fn is provided).
eq_jac
an optional function for computing the analytic Jacobian of the equality function (a matrix with n columns and as many rows as there are equality constraints).
ineq_fn
an optional function for calculating inequality constraints.
ineq_lower
the lower bounds for the inequality constraints (must be finite).
ineq_upper
the upper bounds for the inequality constraints (must be finite).
ineq_jac
an optional function for computing the analytic Jacobian of the inequality function (a matrix with n columns and as many rows as there are inequality constraints).
lower
lower bounds for the parameters. These are strictly required.
upper
upper bounds for the parameters. These are strictly required.
control
a list of solver control parameters (see details).
use_r_version
(logical) used for debugging and validation; uses the R version of the solver rather than the C++ version. Will be deprecated in future releases.
...
additional arguments passed to the supplied functions (common to all functions supplied).
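As a quick illustration (a minimal sketch, not taken from the package's own examples), the following call minimizes the two-parameter Rosenbrock function subject only to the required box bounds and then inspects the returned list:

# illustrative only: objective, starting values and bounds chosen for this sketch
fn <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
sol <- csolnp(pars = c(-1.2, 1), fn = fn,
              lower = c(-2, -2), upper = c(2, 2))
str(sol)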
The optimization problem solved by csolnp is formulated as:
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g(x) = b \\
& h_l \le h(x) \le h_u\\
& x_l \le x \le x_u\\
\end{aligned}
where f(x) is the objective function, g(x) is the vector of equality constraints with target value b, h(x) is the vector of inequality constraints bounded by h_l and h_u, and x_l and x_u are the parameter bounds. Internally, inequality constraints are converted into equality constraints using slack variables, and the problem is solved using an augmented Lagrangian approach.
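For intuition, the slack-variable reformulation can be written as follows (a restatement of the problem above for exposition; the solver's internal scaling and bookkeeping are not shown):
\begin{aligned}
\min_{x,\, s} \quad & f(x) \\
\text{s.t.} \quad & g(x) = b \\
& h(x) - s = 0 \\
& h_l \le s \le h_u \\
& x_l \le x \le x_u
\end{aligned}
where s is the vector of slack variables, so the inequality on h(x) becomes a simple bound constraint on s.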
This function is based on the original R code, but converted to C++, making use of Rcpp and RcppArmadillo. Additionally, it allows the user to pass in analytic gradients and Jacobians; otherwise, finite differences computed with functions from the numDeriv package are used.
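As a sketch of the expected shapes (illustrative; the objective and constraint below are invented for this example): with n = 2 parameters and one inequality constraint, the gradient returns a length-2 vector and the inequality Jacobian a 1 x 2 matrix.

fn       <- function(x) x[1]^2 + x[2]^2           # illustrative objective
gr       <- function(x) c(2 * x[1], 2 * x[2])     # analytic gradient, length n
ineq_fn  <- function(x) x[1] + x[2]               # one inequality constraint
ineq_jac <- function(x) matrix(c(1, 1), nrow = 1) # rows = no. of inequalities, cols = n
sol <- csolnp(pars = c(2, 2), fn = fn, gr = gr,
              ineq_fn = ineq_fn, ineq_lower = 1, ineq_upper = 10,
              ineq_jac = ineq_jac,
              lower = c(-10, -10), upper = c(10, 10))

If gr and the Jacobians are omitted, they are instead approximated by finite differences via numDeriv, as noted above.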
The control list consists of the following options:
Numeric. Initial penalty parameter for the augmented Lagrangian; controls the weight given to constraint violation in the objective. Default is 1.
Integer. Maximum number of major (outer) iterations allowed. Default is 400.
Integer. Maximum number of minor (inner) iterations (per major iteration) for the quadratic subproblem solver. Default is 800.
Numeric. Convergence tolerance for both feasibility (constraint violation) and optimality (change in objective). The algorithm terminates when changes fall below this threshold. Default is 1e-8.
Integer. If 1, prints progress; 2 additionally includes diagnostic information during optimization. Default is 0.
Tracing information provides the following:
The current major iteration number.
The value of the objective function f(x) at the current iterate.
The norm of the current constraint violation, summarizing how well all constraints (equality and inequality) are satisfied; typically the Euclidean or infinity norm.
The relative change in the objective function value compared to the previous iteration, i.e., |f_k - f_{k-1}| / max(1, |f_{k-1}|) (a short numeric illustration follows this list).
The norm of the parameter update taken in this iteration, i.e., ||x_k - x_{k-1}||.
The current value of the penalty parameter (\rho) in the augmented Lagrangian. This parameter is adaptively updated to balance objective minimization and constraint satisfaction.
A list with the following slots:
The parameters at the optimal solution found.
The value of the objective at the optimal solution found.
A vector of objective values obtained at each outer iteration.
The number of outer iterations used to arrive at the solution.
The convergence code (0 = converged).
The convergence message.
A list of optimal solution diagnostics.
The vector of Lagrange multipliers at the optimal solution found.
The number of function evaluations.
The time taken to find a solution.
The Hessian at the optimal solution.
Alexios Galanos