# performWeightedBisectionOptimization: Performs a weighted bi-section optimization. In kerschke/mogsa: A Multi-Objective Optimization Algorithm Based on Multi-Objective Gradients

## Description

Weighted version of the bisection optimization method. Given two points `x1` and `x2` on opposite sides of the optimum, this optimizer iteratively splits the interval [`x1`, `x2`] into the two parts [`x1`, `x.new`] and [`x.new`, `x2`] and proceeds with the interval whose boundaries are still located on opposite sides of the optimum. In contrast to the classical bisection method, where `x.new` is the arithmetic mean of `x1` and `x2`, this version uses the lengths of the bi-objective gradients at `x1` and `x2` to compute a more promising cut-point `x.new`.
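The cut-point rule can be sketched for a simplified one-dimensional case. Note the sketch below is an illustrative assumption, not the package's implementation: it bisects on the gradient of an aggregated objective `fn = fn1 + fn2`, whereas `performWeightedBisectionOptimization` works with the multi-objective gradient in d dimensions. The helper names (`weightedBisection`, `gr`) are made up for this sketch.

```r
# Minimal sketch (assumed, simplified): weighted bisection on the gradient
# of an aggregated objective in one dimension.
fn <- function(x) (x - 2)^2 + (x + 1)^2   # minimum at x = 0.5
gr <- function(x, h = 1e-6) (fn(x + h) - fn(x - h)) / (2 * h)

weightedBisection <- function(x1, x2, max.steps = 1000L, tol = 1e-8) {
  x.new <- x1
  for (i in seq_len(max.steps)) {
    l1 <- abs(gr(x1)); l2 <- abs(gr(x2))
    # Weighted cut-point: the boundary with the shorter gradient pulls the
    # cut towards it, unlike the arithmetic mean of classical bisection.
    x.new <- x1 + l1 / (l1 + l2) * (x2 - x1)
    if (abs(gr(x.new)) < tol || abs(x2 - x1) < tol) return(x.new)
    # Keep the sub-interval whose boundaries still bracket the optimum,
    # i.e., whose gradients point in opposite directions.
    if (sign(gr(x1)) == sign(gr(x.new))) x1 <- x.new else x2 <- x.new
  }
  x.new
}

weightedBisection(-1, 2)  # converges to roughly 0.5
```

Since the gradient length shrinks towards zero near the optimum, the weighted cut-point tends to land closer to the optimum than the plain midpoint, which is the intuition behind the weighting.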

## Usage

```r
performWeightedBisectionOptimization(x1, x2, fn1, fn2, g1 = NULL,
  g2 = NULL, prec.grad = 1e-06, prec.norm = 1e-06, max.steps = 1000L,
  lower, upper)
```

## Arguments

- `x1` [`numeric(d)`]: d-dimensional individual located on one side of the (bi-objective) optimum.
- `x2` [`numeric(d)`]: d-dimensional individual located on the opposite side (w.r.t. `x1`) of the (bi-objective) optimum.
- `fn1` [`function`]: The first objective used for computing the multi-objective gradient.
- `fn2` [`function`]: The second objective used for computing the multi-objective gradient.
- `g1` [`function`]: The gradient of the first objective. If missing, it will be approximated using `estimateGradientBothDirections`.
- `g2` [`function`]: The gradient of the second objective. If missing, it will be approximated using `estimateGradientBothDirections`.
- `prec.grad` [`numeric(1L)`]: Precision value (= step size) used for approximating the gradient. The default is `1e-6`.
- `prec.norm` [`numeric(1L)`]: Precision threshold used when normalizing a vector: every element of the vector whose absolute value is below this threshold will be replaced by 0. The default is `1e-6`.
- `max.steps` [`integer(1L)`]: Maximum number of bisection steps allowed for reaching an optimum. The default is `1000L`.
- `lower` [`numeric(d)`]: Vector of lower bounds.
- `upper` [`numeric(d)`]: Vector of upper bounds.
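The roles of `prec.grad` and `prec.norm` can be illustrated with a small central-difference sketch. The helpers `approxGrad` and `normalizeVec` below are hypothetical names, and the central-difference scheme is an assumption about how `estimateGradientBothDirections` works, inferred from the argument descriptions above.

```r
# Illustrative central-difference gradient with step size prec.grad
# (an assumption, not the package's actual estimator):
approxGrad <- function(fn, x, prec.grad = 1e-6) {
  vapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, prec.grad)
    (fn(x + e) - fn(x - e)) / (2 * prec.grad)
  }, numeric(1L))
}

# prec.norm: elements whose absolute value falls below the threshold
# are zeroed before the vector is scaled to unit length:
normalizeVec <- function(g, prec.norm = 1e-6) {
  g[abs(g) < prec.norm] <- 0
  if (all(g == 0)) g else g / sqrt(sum(g^2))
}

fn1 <- function(x) sum((x - c(2, 0))^2)
approxGrad(fn1, c(1, 1))   # analytic gradient is c(-2, 2)
normalizeVec(c(1, 1e-9))   # tiny component is zeroed: c(1, 0)
```

Zeroing near-zero components via `prec.norm` guards the normalization against numerical noise from the finite-difference approximation.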

## Value

[`list(4L)`]
List containing a matrix (`opt.path`) with the individuals along the optimization path, the corresponding number of function evaluations (`fn.evals`), the single-objective gradients of the last individual (`gradient.list`), and a flag indicating whether the optimizer found a local optimum.

## Examples

```r
# Define two single-objective test problems:
fn1 = function(x) sum((x - c(2, 0))^2)
fn2 = function(x) sum((x - c(0, 1))^2)

# Visualize the locally efficient set, i.e., the "area" where we ideally
# want to find a point:
plot(c(2, 0), c(0, 1), type = "o", pch = 19,
  xlab = expression(x[1]), ylab = expression(x[2]), las = 1, asp = 1)
text(2, 0, "Optimum of fn1", pos = 2, offset = 1.5)
text(0, 1, "Optimum of fn2", pos = 4, offset = 1.5)

# Place two points x1 and x2 on opposite sides of the bi-objective optimum:
x1 = c(1, 1)
x2 = c(0.5, 0)
points(rbind(x1, x2), pch = 19, type = "o", lty = "dotted")
text(rbind(x1, x2), labels = c("x1", "x2"), pos = 4)

# Optimize using weighted bisection optimization:
opt.path = performWeightedBisectionOptimization(x1 = x1, x2 = x2,
  fn1 = fn1, fn2 = fn2)$opt.path

# Visualize the optimization path:
points(opt.path)

# Highlight the found locally efficient point (= local optimum w.r.t.
# both objectives):
n = nrow(opt.path)
points(opt.path[n, 1], opt.path[n, 2], pch = 4, col = "red", cex = 2)
text(opt.path[n, 1], opt.path[n, 2], "Found Local Efficient Point",
  pos = 4, offset = 1.5)
```

kerschke/mogsa documentation built on Oct. 27, 2018, 12:13 a.m.