cobyla: Constrained Optimization by Linear Approximations


Constrained Optimization by Linear Approximations

Description

COBYLA is an algorithm for derivative-free optimization with nonlinear inequality and equality constraints (but see below).

Usage

cobyla(
  x0,
  fn,
  lower = NULL,
  upper = NULL,
  hin = NULL,
  nl.info = FALSE,
  control = list(),
  deprecatedBehavior = TRUE,
  ...
)

Arguments

x0

starting point for searching the optimum.

fn

objective function that is to be minimized.

lower, upper

lower and upper bound constraints.

hin

function defining the inequality constraints. With the default deprecatedBehavior = TRUE the constraints are interpreted as hin >= 0 for all components; with deprecatedBehavior = FALSE they are interpreted as hin <= 0 (see deprecatedBehavior below).

nl.info

logical; if TRUE, the original NLopt status information is printed.

control

list of options, see nl.opts for help.

deprecatedBehavior

logical; if TRUE (default for now), the old behavior of the inequality constraint function is used, in which the constraints must satisfy hin >= 0 instead of hin <= 0. This default will be reversed in a future release and the old behavior eventually removed; see the short example following the Arguments list.

...

additional arguments passed to the function.
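
To make the sign convention concrete, here is a minimal sketch (not part of the original documentation; the toy objective and constraint are invented for illustration). The same feasible region, x[1] + x[2] <= 5, is written in both forms; both calls should return roughly c(2.5, 2.5):

library(nloptr)

fn.toy <- function(x) (x[1] - 4)^2 + (x[2] - 4)^2

hin.old <- function(x) 5 - x[1] - x[2]  # >= 0 form, deprecatedBehavior = TRUE
hin.new <- function(x) x[1] + x[2] - 5  # <= 0 form, deprecatedBehavior = FALSE

cobyla(c(0, 0), fn.toy, hin = hin.old, deprecatedBehavior = TRUE)$par
cobyla(c(0, 0), fn.toy, hin = hin.new, deprecatedBehavior = FALSE)$par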

Details

COBYLA constructs successive linear approximations of the objective function and constraints via a simplex of n+1 points (in n dimensions) and optimizes these approximations in a trust region at each step.

COBYLA supports equality constraints by transforming them into two inequality constraints. This functionality has not been added to the wrapper. To use COBYLA with equality constraints, please use the full nloptr invocation.
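
For illustration, here is a minimal sketch of such a full nloptr invocation (the objective, the equality constraint, and the option values are invented for this example and are not part of the original documentation). nloptr expects equality constraints in the form h(x) == 0:

library(nloptr)

## Minimize (x1 - 1)^2 + (x2 - 2)^2 subject to x1 + x2 == 1
res <- nloptr(
  x0        = c(0, 0),
  eval_f    = function(x) (x[1] - 1)^2 + (x[2] - 2)^2,
  eval_g_eq = function(x) x[1] + x[2] - 1,  # h(x) == 0
  opts      = list(algorithm = "NLOPT_LN_COBYLA",
                   xtol_rel  = 1e-8,
                   maxeval   = 1000)
)
res$solution  # approximately c(0, 1)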

Value

List with components:

par

the optimal solution found so far.

value

the function value corresponding to par.

iter

number of (outer) iterations, see maxeval.

convergence

integer code indicating successful completion (> 0) or a possible error number (< 0).

message

character string produced by NLopt and giving additional information.

Note

The original code, written in Fortran by Powell, was converted to C for the SciPy project.

Author(s)

Hans W. Borchers

References

M. J. D. Powell, “A direct search optimization method that models the objective and constraint functions by linear interpolation,” in Advances in Optimization and Numerical Analysis, eds. S. Gomez and J.-P. Hennart (Kluwer Academic: Dordrecht, 1994), pp. 51–67.

See Also

bobyqa, newuoa

Examples


##  Solve the Hock-Schittkowski problem no. 100. COBYLA is derivative-free,
##  so no gradient functions are needed.
##  See https://apmonitor.com/wiki/uploads/Apps/hs100.apm

x0.hs100 <- c(1, 2, 0, 4, 0, 1, 1)
fn.hs100 <- function(x) {
  (x[1] - 10)^2 + 5 * (x[2] - 12)^2 + x[3]^4 + 3 * (x[4] - 11)^2 +
    10 * x[5]^6 + 7 * x[6]^2 + x[7]^4 - 4 * x[6] * x[7] - 10 * x[6] - 8 * x[7]
}

hin.hs100 <- function(x) {
  c(2 * x[1]^2 + 3 * x[2]^4 + x[3] + 4 * x[4]^2 + 5 * x[5] - 127,
    7 * x[1] + 3 * x[2] + 10 * x[3]^2 + x[4] - x[5] - 282,
    23 * x[1] + x[2]^2 + 6 * x[6]^2 - 8 * x[7] - 196,
    4 * x[1]^2 + x[2]^2 - 3 * x[1] * x[2] + 2 * x[3]^2 + 5 * x[6] - 11 * x[7])
}

S <- cobyla(x0.hs100, fn.hs100, hin = hin.hs100, nl.info = TRUE,
            control = list(xtol_rel = 1e-8, maxeval = 2000),
            deprecatedBehavior = FALSE)

##  The optimum value of the objective function should be 680.6300573
##  A suitable parameter vector is roughly
##  (2.330, 1.9514, -0.4775, 4.3657, -0.6245, 1.0381, 1.5942)

S
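
##  The components listed under 'Value' can also be extracted individually
##  (names taken from the Value section above):

S$par          # best parameter vector found
S$value        # objective value at 'par', about 680.6300573
S$convergence  # positive integer code on successful completion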

