mlr_optimizers_nloptr (R Documentation)
OptimizerBatchNLoptr class that implements non-linear optimization.
Calls nloptr::nloptr() from package nloptr.
algorithm (character(1))
Algorithm to use.
See nloptr::nloptr.print.options() for available algorithms.
x0 (numeric())
Initial parameter values.
Use start_values parameter to create "random" or "center" start values.
start_values (character(1))
Create "random" start values or start from the "center" of the search space?
In the latter case, the center is computed on the parameters before any trafo is applied.
Custom start values can be passed via the x0 parameter.
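A minimal sketch of the three ways start values can be configured, assuming bbotk is loaded:

```r
library(bbotk)

# default: "random" start values drawn from the search space
optimizer_random = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA",
  start_values = "random")

# start from the center of the (untransformed) search space
optimizer_center = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA",
  start_values = "center")

# or pass explicit start values via x0
optimizer_custom = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA",
  x0 = c(0, 0))
```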
approximate_eval_grad_f (logical(1))
Should gradients be numerically approximated via finite differences (nloptr::nl.grad)?
Only required for certain algorithms.
Note that function evaluations required for the numerical gradient approximation will be logged as usual and are not treated differently than regular function evaluations by, e.g., Terminators.
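To illustrate what a finite-difference gradient approximation does, here is a base-R sketch using central differences; `fd_grad` is a hypothetical helper written for this illustration, not part of nloptr:

```r
# central finite differences: each gradient costs 2 * length(x) extra
# function evaluations, which is why these evaluations count towards
# Terminators just like regular ones
fd_grad = function(f, x, h = 1e-6) {
  vapply(seq_along(x), function(i) {
    e = replace(numeric(length(x)), i, h)  # unit step in coordinate i
    (f(x + e) - f(x - e)) / (2 * h)
  }, numeric(1))
}

f = function(x) sum(x^2)   # analytic gradient is 2 * x
fd_grad(f, c(1, -2))       # close to c(2, -4)
```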
For the meaning of other control parameters, see nloptr::nloptr() and nloptr::nloptr.print.options().
The algorithm can be terminated with all Terminators. Additionally, the following internal termination parameters can be used:
stopval (numeric(1))
Stop value.
Deactivate with -Inf.
Default is -Inf.
maxtime (integer(1))
Maximum time.
Deactivate with -1L.
Default is -1L.
maxeval (integer(1))
Maximum number of evaluations.
Deactivate with -1L.
Default is -1L.
xtol_rel (numeric(1))
Relative tolerance.
Original default is 10^-4.
Deactivate with -1.
Overwritten with -1, i.e. deactivated by default.
xtol_abs (numeric(1))
Absolute tolerance.
Deactivate with -1.
Default is -1.
ftol_rel (numeric(1))
Relative tolerance.
Deactivate with -1.
Default is -1.
ftol_abs (numeric(1))
Absolute tolerance.
Deactivate with -1.
Default is -1.
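A sketch of how one of these internal termination parameters might be set when constructing the optimizer, assuming bbotk is loaded; the parameter names follow the list above:

```r
library(bbotk)

# terminate when the relative change in x falls below 1e-4,
# in addition to whatever Terminator the instance uses
optimizer = opt("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  xtol_rel = 1e-4
)
```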
$optimize() supports progress bars via the progressr package
combined with a Terminator. Simply wrap the function call in
progressr::with_progress() to enable them. We recommend using the
progress package as the backend; enable it with progressr::handlers("progress").
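A minimal sketch of enabling progress bars, assuming an instance and optimizer constructed as in the example below:

```r
library(bbotk)
library(progressr)

# use the progress package as the reporting backend
handlers("progress")

# progress updates are emitted while the Terminator budget is consumed
with_progress(
  optimizer$optimize(instance)
)
```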
bbotk::Optimizer -> bbotk::OptimizerBatch -> OptimizerBatchNLoptr
new(): Creates a new instance of this R6 class.
OptimizerBatchNLoptr$new()
clone(): The objects of this class are cloneable with this method.
OptimizerBatchNLoptr$clone(deep = FALSE)
deep: Whether to make a deep clone.
Johnson, Steven G. (2020). “The NLopt nonlinear-optimization package.” https://github.com/stevengj/nlopt.
# example only runs if nloptr is available
if (mlr3misc::require_namespaces("nloptr", quietly = TRUE)) {
  library(bbotk)

  # define the objective function
  fun = function(xs) {
    list(y = -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10)
  }

  # set domain
  domain = ps(
    x1 = p_dbl(-10, 10),
    x2 = p_dbl(-5, 5)
  )

  # set codomain
  codomain = ps(
    y = p_dbl(tags = "maximize")
  )

  # create objective
  objective = ObjectiveRFun$new(
    fun = fun,
    domain = domain,
    codomain = codomain,
    properties = "deterministic"
  )

  # initialize instance
  instance = oi(
    objective = objective,
    terminator = trm("evals", n_evals = 20)
  )

  # load optimizer
  optimizer = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA")

  # trigger optimization
  optimizer$optimize(instance)

  # all evaluated configurations
  instance$archive

  # best performing configuration
  instance$result
}