grad.desc: Gradient Descent Algorithm for the 2D case


View source: R/grad.desc.R

Description

This function provides a visual illustration of the process of minimizing a real-valued function via the gradient descent algorithm.

Usage

grad.desc(
  FUN = function(x, y) x^2 + 2 * y^2,
  rg = c(-3, -3, 3, 3),
  init = c(-3, 3),
  gamma = 0.05,
  tol = 0.001,
  gr = NULL,
  len = 50,
  interact = FALSE,
  col.contour = "red",
  col.arrow = "blue",
  main
)

Arguments

FUN

a bivariate objective function to be minimized (the variable names do not have to be x and y); if the gradient argument gr is NULL, deriv will be used to compute the gradient, in which case the body of FUN must not be wrapped in braces (e.g. the default function is function(x, y) x^2 + 2 * y^2); see the sketch after this list

rg

ranges of the two independent variables over which to plot the contours, in the form c(x0, y0, x1, y1)

init

starting values

gamma

the step size of each iteration (often called the learning rate)

tol

the tolerance for stopping the iterations: iteration stops when the difference between F(x[i]) and F(x[i+1]) falls below tol

gr

the gradient of FUN; it should be a bivariate function that calculates the gradient (not the negative gradient!) of FUN at a point (x, y), e.g. function(x, y) c(2 * x, 4 * y). If it is NULL, R will use deriv to calculate the gradient

len

desired length of the independent sequences (to compute z values for contours)

interact

logical: whether to choose the starting values by clicking on the contour plot directly

col.contour, col.arrow

colors for the contour lines and the arrows, respectively (defaults are red and blue)

main

the title of the plot; if missing, it will be derived from FUN
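
As a minimal sketch (using the default quadratic objective, with its analytical gradient supplied explicitly instead of relying on deriv), a typical call looks like this:

library(animation)
## the default objective written out, with its analytical gradient
res <- grad.desc(
  FUN = function(x, y) x^2 + 2 * y^2,
  rg = c(-3, -3, 3, 3), init = c(-3, 3),
  gamma = 0.05, tol = 0.001,
  gr = function(x, y) c(2 * x, 4 * y)  # the gradient, not its negative
)
res$par    # location of the local minimum found
res$value  # objective value at res$par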

Details

Gradient descent is an optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or the approximate gradient) of the function at the current point. If instead one takes steps proportional to the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
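
In symbols, each iteration performs the update x[i+1] = x[i] - gamma * gradient(F)(x[i]). A minimal base-R sketch of this loop (independent of the package, using the same quadratic objective as above) is:

gr <- function(p) c(2 * p[1], 4 * p[2])  # gradient of x^2 + 2 * y^2
p <- c(-3, 3)   # starting point
gamma <- 0.05   # step size
for (i in 1:100) p <- p - gamma * gr(p)  # step against the gradient
p  # approaches the minimum at (0, 0)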

The arrows indicate the successive iterations and the process of minimization; they will reach a local minimum in the end, provided the maximum number of iterations ani.options('nmax') has not been exceeded.

Value

A list containing

par

the solution for the local minimum

value

the value of the objective function corresponding to par

iter

the number of iterations; if it is equal to ani.options('nmax'), the solution is quite likely unreliable because the maximum number of iterations was reached

gradient

the gradient function of the objective function

persp

a function to create a perspective plot of the objective function; it can accept further arguments to be passed to persp (see the sketch below)
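
For instance (a hedged sketch; the arguments shown are simply forwarded to persp):

res <- grad.desc()
res$persp(col = "lightblue", theta = 30, phi = 30)  # 3D view of the objective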

Note

Please make sure the function FUN is differentiable at init; moreover, if you do not provide the gradient function gr, FUN must also be differentiable by deriv (i.e. composed only of functions that deriv knows how to differentiate).

If the arrows cannot reach the local minimum, the maximum number of iterations nmax in ani.options may need to be increased.
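
For example (ani.options returns the previous options, so they can be restored afterwards):

library(animation)
oopt <- ani.options(nmax = 200)  # allow up to 200 iterations
grad.desc()
ani.options(oopt)                # restore the previous options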

Author(s)

Yihui Xie

References

Examples at https://yihui.org/animation/example/grad-desc/

See Also

deriv, persp, contour, optim

