backtracking: Backtracking line search.


View source: R/backtracking.R

Description

backtracking is an approximate line search method that determines a step length giving a sufficient reduction in the objective value along a given search direction.

Usage

backtracking(obj.list, x.list, searchD, alpha0 = 1, rho = 0.5, c = 1e-04)

Arguments

obj.list

A list with the problem functions.

x.list

A list with the current solution. It must contain the named elements
x: a vector, the current point in the search space
fx: a scalar, the objective value at x
dfx: a vector, the gradient at x

searchD

A search direction for the method.

alpha0

Initial step size, usually set to 1 in most Newton-based methods.

rho

A contraction factor used to shrink alpha at each iteration (control parameter).

c

A small constant for the sufficient-decrease (Armijo) condition (control parameter).

Details

Given a starting point x and a descent direction s, the method approximately minimizes f(x + alpha*s): it sets alpha = alpha0 (1 by default) and repeatedly reduces it by the factor rho (0.5 by default) until the following sufficient-decrease condition is satisfied:

f(x + alpha*s) <= f(x) + c*alpha*(t(df(x))*s)

wherein c is a small constant (c = 1e-4 by default) and df(x) is the gradient evaluated at the starting point x.
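The loop described above can be sketched in a few lines of R. This is a minimal illustration of the technique, not the package's actual implementation; the function name backtrack_sketch and its argument layout are made up for this example.

```r
# Minimal sketch of backtracking line search (Armijo condition).
# Illustrative only; not the implementation in R/backtracking.R.
backtrack_sketch <- function(f, x, fx, dfx, searchD,
                             alpha0 = 1, rho = 0.5, c = 1e-4) {
  alpha <- alpha0
  slope <- sum(dfx * searchD)  # directional derivative t(df(x)) %*% s
  while (f(x + alpha * searchD) > fx + c * alpha * slope) {
    alpha <- rho * alpha       # shrink the step by the factor rho
  }
  alpha
}

# Quadratic test function minimized at x = (1, 2, ..., n)
f  <- function(x) sum((x - seq_along(x))^2)
df <- function(x) 2 * (x - seq_along(x))
x  <- c(0, 1, 0, 1)
alpha <- backtrack_sketch(f, x, f(x), df(x), -df(x))
```

With the steepest-descent direction s = -df(x), slope is the negative squared gradient norm, so the condition is guaranteed to hold for small enough alpha.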

Value

Returns the step length alpha that provides a reasonable decrease in function value.

References

  1. Nocedal, Jorge; Wright, Stephen J.; Numerical Optimization, 2nd ed., page 37.

  2. Wikipedia, Backtracking line search https://en.wikipedia.org/wiki/Backtracking_line_search.

Examples

# A quadratic translated function and its gradient
f <- function (x)
{
 i <- seq(1, length(x))
 return (sum( (x - i)^2 ))
}
fun <- list(f = f)
df <- function(x)
{
  i <- seq(1, length(x))
  return (2*(x - i))
}
# Get the current point in the form of a list
x <- c(0,1,0,1) #current point
X.list <- list(x = x, fx = f(x), dfx = df(x))
searchD <- -df(x) #search direction (negative of the gradient, for instance)
# Backtracking with default parameters
alpha.opt <- backtracking(fun, X.list, searchD)
print(f(x + alpha.opt*searchD)) #optimum by coincidence
# If rho were set to, say, 0.6 instead, the result would be
alpha.opt2 <- backtracking(fun, X.list, searchD, rho = 0.6)
# Which is not the optimum in this case, but already smaller than the initial
# solution x
print(f(x + alpha.opt2*searchD))

brunasqz/NonlinearOpMethods documentation built on Oct. 27, 2019, 5:46 a.m.