Description
The purpose of Rcgdescent is to minimize an unconstrained function of many parameters by a nonlinear conjugate gradients method.
Usage

Rcgdescent(par, fn, gr, control = list(maxit = 500), ...)
Arguments

par: A numeric vector of starting estimates.

fn: A function that returns the value of the objective at the supplied set of parameters.

gr: A function that returns the gradient of the objective at the supplied set of parameters. The use of numerical gradients for Rcgdescent is discouraged. First, the termination test uses a size measure on the gradient, and numerical gradient approximations can sometimes give results that are too large. Second, if there are bounds constraints, the step(s) taken to calculate the approximation to the derivative are NOT checked to see if they are out of bounds, and the function may be undefined at the evaluation point. There is also the option of supplying gr="grfwd" to use a standard forward difference numerical approximation; a sketch of this idea appears after this list.

control: An optional list of control settings.

...: Further arguments to be passed to fn.
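The forward difference idea mentioned under gr can be sketched as follows. This is an illustrative approximation only; the name grfwd_sketch and the relative step-size rule are assumptions, not the package's internal grfwd routine.

# Illustrative forward-difference gradient (a sketch, not the package code)
grfwd_sketch <- function(par, userfn, eps = 1e-7, ...) {
  fbase <- userfn(par, ...)          # objective at the base point
  n <- length(par)
  gg <- numeric(n)
  for (i in 1:n) {
    h <- eps * (abs(par[i]) + eps)   # assumed relative step size
    pstep <- par
    pstep[i] <- pstep[i] + h
    gg[i] <- (userfn(pstep, ...) - fbase) / h
  }
  gg
}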
Details

Functions fn must return a numeric value.

The control argument is a list. Note that the entries described below may need correction as the package is further developed.
maxit: A limit on the number of iterations (default 500). Note that this is used to compute a quantity maxfeval <- round(sqrt(n+1)*maxit), where n is the number of parameters to be minimized.
trace: Set 0 (default) for no output, > 0 for trace output (larger values imply more output).
eps: Tolerance used to calculate numerical gradients. Default is 1.0E-7. See the source code of Rcgdescent for details of application.
dowarn: TRUE if we want warnings generated by optimx. Default is TRUE.
tol: Tolerance used in testing the size of the square of the gradient. Default is 0 on input, in which case a value of tolgr = n*n*.Machine$double.eps is used, where n is the number of parameters, in testing whether crossprod(g) <= tolgr * (abs(fmin) + offset). If the user supplies a non-zero value for tol, that value is used for tolgr. The quantity offset = 100 can only be altered by changing the code; fmin is the current best value found for the function minimum. Note that the scale of the gradient means that tests for a small gradient can easily be mismatched to a given problem; the defaults in Rcgdescent are a "best guess". A short sketch of this test appears at the end of this section.
checkgrad: TRUE if we want the gradient function checked against numerical approximations. Default is FALSE.
checkbounds: TRUE if we want bounds verified. Default is TRUE.
The source code of Rcgdescent for R is likely to remain a work in progress for some time, so users should watch the console output.
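To make the interplay of maxit, tol, and offset concrete, here is a minimal sketch of the evaluation limit and the gradient-size test described above; the example values of g and fmin are invented for illustration.

# Minimal sketch of the evaluation limit and gradient-size test (illustrative)
n <- 10                                  # number of parameters
maxit <- 500                             # control$maxit
maxfeval <- round(sqrt(n + 1) * maxit)   # function evaluation limit
tol <- 0                                 # user tolerance; 0 selects the default
tolgr <- if (tol != 0) tol else n * n * .Machine$double.eps
offset <- 100                            # fixed in the code
g <- rep(1e-10, n)                       # example gradient at the current point
fmin <- 1.0                              # example best function value so far
converged <- crossprod(g) <= tolgr * (abs(fmin) + offset)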
Value

A list with components:

par: The best set of parameters found.

value: The value of the objective at the best set of parameters found.

counts: A two-element integer vector giving the number of calls to fn and gr respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.

convergence: An integer code. 0 indicates successful convergence. 1 indicates that the function evaluation count maxfeval was reached. 2 indicates that the initial point is infeasible.

message: A character string giving any additional information returned by the optimizer, or NULL.
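As a hedged illustration of using these components, the following sketch runs the optimizer on the Rosenbrock function (defined again in the Examples below) and checks the convergence code before trusting the answer; it assumes only the interface documented above.

# Sketch: inspect the components of a returned result
fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
grr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                     200 * (x[2] - x[1]^2))
ans <- Rcgdescent(par = c(-1.2, 1), fn = fr, gr = grr)
if (ans$convergence == 0) {
  cat("Converged to value", ans$value, "with counts", ans$counts, "\n")
} else {
  cat("Termination code", ans$convergence, ":", ans$message, "\n")
}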
References

Hager W. W. and H. Zhang, A new conjugate gradient method with guaranteed descent and an efficient line search, SIAM Journal on Optimization, 16 (2005), 170-192.
Hager W. W. and H. Zhang, Algorithm 851: CG_DESCENT, A conjugate gradient method with guaranteed descent, ACM Transactions on Mathematical Software, 32 (2006), 113-137.
Hager W. W. and H. Zhang, A survey of nonlinear conjugate gradient methods, Pacific Journal of Optimization, 2 (2006), pp. 35-58.
Hager W. W. and H. Zhang, Limited memory conjugate gradients, www.math.ufl.edu/~hager/papers/CG/lcg.pdf
Nash J. C. (1979). Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation. Adam Hilger, Bristol. Second Edition, 1990, Bristol: Institute of Physics Publications.
Nash, J. C. and M. Walker-Smith (1987). Nonlinear Parameter Estimation: An Integrated System in BASIC. New York: Marcel Dekker. See http://www.nashinfo.com/nlpe.htm for a downloadable version of this plus some extras.
Examples

#####################
require(numDeriv)
require(Rcgdescent)
## Rosenbrock Banana function
fr <- function(x) {
x1 <- x[1]
x2 <- x[2]
100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) { ## Gradient of 'fr'
x1 <- x[1]
x2 <- x[2]
c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
200 * (x2 - x1 * x1))
}
grn <- function(x) { # numerical gradient of 'fr' via numDeriv::grad
  grad(fr, x)
}
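# Added check (not in the original example): compare the analytic gradient
# grr with the numDeriv approximation grn at the starting point.
xstart <- c(-1.2, 1)
cat("max abs(analytic - numerical) gradient at start:",
    max(abs(grr(xstart) - grn(xstart))), "\n")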
# myctrl <- ControlParams()
ansrosenbrock0 <- Rcgdescent(par=c(-1.2,1), fn=fr, gr=grr)
print(ansrosenbrock0)
cat("\n\n Compare to optim::CG, Polak Ribiere\n")
aoptimCG0 <- optim(par=c(-1.2,1), fn=fr, gr=grr, method="CG", control=list(type=2, maxit=500))
print(aoptimCG0)
ansrosenbrock0n <- Rcgdescent(par=c(-1.2,1), fn=fr, gr=grn) # numerical gradient grn
print(ansrosenbrock0n)
cat("\n\n Compare to optim::CG, Polak Ribiere\n")
aoptimCG0 <- optim(par=c(-1.2,1), fn=fr, gr=grr, method="CG", control=list(type=2, maxit=500))
print(aoptimCG0)
cat("No gradient specified\n")
aoptimCG00n <- optim(par=c(-1.2,1), fn=fr, method="CG", control=list(type=2, maxit=500))
print(aoptimCG00n)
#####################
genrose.f<- function(x, gs=NULL){ # objective function
## One generalization of the Rosenbrock banana valley function (n parameters)
n <- length(x)
if(is.null(gs)) { gs=100.0 }
fval<-1.0 + sum (gs*(x[1:(n-1)]^2 - x[2:n])^2 + (x[2:n] - 1)^2)
return(fval)
}
genrose.g <- function(x, gs=NULL){
# vectorized gradient for genrose.f
# Ravi Varadhan 2009-04-03
n <- length(x)
if(is.null(gs)) { gs=100.0 }
gg <- as.vector(rep(0, n))
tn <- 2:n
tn1 <- tn - 1
z1 <- x[tn] - x[tn1]^2
z2 <- 1 - x[tn]
gg[tn] <- 2 * (gs * z1 - z2)
gg[tn1] <- gg[tn1] - 4 * gs * x[tn1] * z1
gg
}
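# Added check (not in the original example): verify the vectorized gradient
# against numDeriv::grad at a test point; gs=10 matches the runs below.
xt <- rep(pi, 4)
cat("max abs(genrose.g - numDeriv grad):",
    max(abs(genrose.g(xt, gs=10) - grad(genrose.f, xt, gs=10))), "\n")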
# analytic gradient test
xx<-rep(pi,10)
genrosea<-Rcgdescent(par=xx, fn=genrose.f, gr=genrose.g, gs=10)
print(genrosea)
cat("\n\n Compare to optim::CG, Polak Ribiere\n")
genroseoptim <- optim(par=xx, fn=genrose.f, gr=genrose.g, method="CG", control=list(type=2), gs=10)
print(genroseoptim)
######### One dimension test
nn<-1
sqtst <- function(xx) {
  sum((xx - 2) * (xx - 2)) # quadratic with minimum at xx = 2
}
gsqtst <- function(xx) {
  2 * (xx - 2) # gradient of sqtst
}
startx<-rep(0,nn)
onepar<-Rcgdescent(par=startx, fn=sqtst, gr=gsqtst)
print(onepar)