optim: optim Function

R Documentation

Description

The function whose gradient is computed is a wrapper that already contains 'fn', so 'gr' does not need the extra arguments. For every call to 'gr', the function whose gradient is computed changes, but the point at which the gradient is evaluated is always rep(0, length(x)).
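A hedged sketch of this contract, using a hypothetical make_wrapper helper (not part of the package), may make it concrete:

```r
## Hypothetical illustration (not the package's actual internals): 'fn' and
## its extra arguments are captured in a closure, and the function is shifted
## so that the gradient is always evaluated at the origin, rep(0, length(x)).
make_wrapper <- function(fn, x, ...) {
  function(z) fn(x + z, ...)
}

## A generic gradient function then only ever sees fun(z), evaluated at z = 0:
## gr(make_wrapper(fn, x, ...), rep(0, length(x)))
```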

Usage

optim(
  par,
  fn,
  gr = NULL,
  ...,
  control = list(),
  hessian = FALSE,
  smart = TRUE,
  gr.args = list()
)

Arguments

par

Initial values for the parameters to be optimized over (as in stats::optim).

fn

A function to be minimized (or maximized).

gr

An optional generic gradient function of the form gr(fun, x, ...); it returns the gradient and is used by the "BFGS" method.

The default gradient function uses central differences with a fixed step size (argument step.size = 0.001).
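For reference, a central-difference gradient of this form can be sketched as follows (central_diff_grad is a hypothetical name, not the package's internal function):

```r
## Sketch of a central-difference gradient with a fixed step size; the
## default step.size = 0.001 matches the value documented above.
central_diff_grad <- function(fun, x, step.size = 0.001) {
  vapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, step.size)  # i-th unit step
    (fun(x + e) - fun(x - e)) / (2 * step.size)
  }, numeric(1))
}
```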

control

a list of control parameters. See stats::optim function for details.

hessian

Logical. Should the Hessian be returned?

smart

Logical. Should the Smart Gradient technique be applied to the gradient? Default is TRUE.

gr.args

A list of additional arguments to be passed to 'gr'.

'...'

Optional arguments passed to 'fn': fn(x, ...).

Details

More details about this function can be found in stats::optim.

Examples

myfun <- function(x) { 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2 }  # Rosenbrock function

mygrad <- function(fun, x) {
  h <- 0.001
  grad <- numeric(2)
  grad[1] <- (fun(x + c(h, 0)) - fun(x - c(h, 0))) / (2 * h)
  grad[2] <- (fun(x + c(0, h)) - fun(x - c(0, h))) / (2 * h)
  grad
}

x_initial <- rnorm(2)
result <- smartGrad::optim(par = x_initial,
                           fn = myfun,
                           gr = mygrad,
                           method = "BFGS",
                           smart = TRUE)

esmail-abdulfattah/smartGrad documentation built on March 19, 2022, 3:01 p.m.