| optim | R Documentation |
Rather than receiving 'fn' and its extra arguments directly, 'gr' is passed a wrapper function that already contains 'fn', so those extra arguments are not needed. On every call to 'gr' the function whose gradient is to be computed changes, while the point at which the gradient is evaluated is always rep(0, length(x)).
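A minimal sketch of this calling convention, assuming a hypothetical helper 'wrap' (not part of the package): the wrapper closes over 'fn' and the current iterate 'x0', so differentiating it at rep(0, length(x0)) gives the gradient of 'fn' at 'x0'.

```r
# Hypothetical wrapper, for illustration only: shifts fn so that the
# current iterate x0 becomes the origin.
wrap <- function(fn, x0) function(z) fn(x0 + z)

fn <- function(x) sum(x^2)   # example objective
x0 <- c(1, 2)                # current iterate
w  <- wrap(fn, x0)
# The gradient of w at the origin equals the gradient of fn at x0,
# which is why 'gr' is always evaluated at rep(0, length(x)).
```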
optim(par, fn, gr = NULL, ..., control = list(), hessian = FALSE, smart = TRUE, gr.args = list())
par |
Initial values for the parameters to be optimized over. |
fn |
A function to be minimized (or maximized). |
gr |
An optional generic gradient function of the form gr(fun, x, ...), returning the gradient; used by the "BFGS" method. The default gradient function uses central differences with a fixed step size (argument step.size = 0.001). |
control |
A list of control parameters. See stats::optim for details. |
hessian |
Logical. Should the Hessian be returned? |
smart |
Logical. Should the Smart Gradient technique be applied to the gradient? Default is TRUE. |
gr.args |
A list of additional arguments to be passed to 'gr'. |
'...' |
Optional arguments to 'fn': fn(x, ...). |
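The documented default gradient can be sketched as a plain central-difference routine with a fixed step size (illustrative only; 'num_grad' is a hypothetical name, not the package's internal function):

```r
# Central differences with fixed step size, matching the documented
# default behaviour (step.size = 0.001).
num_grad <- function(fun, x, step.size = 0.001) {
  g <- numeric(length(x))
  for (i in seq_along(x)) {
    e <- numeric(length(x))
    e[i] <- step.size
    # symmetric difference quotient in coordinate i
    g[i] <- (fun(x + e) - fun(x - e)) / (2 * step.size)
  }
  g
}
```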
More details about this function can be found in stats::optim.
## Rosenbrock banana function
myfun <- function(x) { 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2 }

## A generic gradient of the form gr(fun, x): central differences with step h
mygrad <- function(fun, x) {
  h <- 0.001
  grad <- numeric(2)
  grad[1] <- (fun(x + c(h, 0)) - fun(x - c(h, 0))) / (2 * h)
  grad[2] <- (fun(x + c(0, h)) - fun(x - c(0, h))) / (2 * h)
  grad
}

x_initial <- rnorm(2)
result <- smartGrad::optim(par = x_initial,
                           fn = myfun,
                           gr = mygrad,
                           method = "BFGS",
                           smart = TRUE)
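For comparison, the same objective can be minimized with stats::optim alone, which falls back to its own finite-difference gradient when 'gr' is omitted (this sketch does not require the smartGrad package):

```r
# Rosenbrock banana function, as in the example above
rosenbrock <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2

# Plain stats::optim with the built-in finite-difference gradient;
# the known minimum is at (1, 1).
res <- stats::optim(par = c(-1.2, 1), fn = rosenbrock, method = "BFGS")
```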