View source: R/18_biggs_exp6.R
biggs_exp6 (R Documentation)
Test function 18 from the Moré, Garbow and Hillstrom paper.
biggs_exp6(m = 13)
m: Number of summand functions in the objective function. Should be greater than or equal to 6.
The objective function is the sum of m functions, each of n parameters.

Dimensions: number of parameters n = 6; number of summand functions m >= n.
Minima: f = 5.65565...e-3 if m = 13. Not reported in the MGH (1981) paper: f = 0 at c(1, 10, 1, 5, 4, 3) and c(4, 10, 3, 5, 1, 1) (and probably others) for all m (probably: I stopped testing after m = 1000).
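For reference, the test function as defined by MGH (1981, problem 18) can be sketched in plain R. This standalone version illustrates the published formula and is not the package's implementation; names and details here are for illustration only:

```r
# Standalone sketch of the Biggs EXP6 objective (MGH 1981, problem 18):
#   t_i = 0.1 * i,  y_i = exp(-t_i) - 5 * exp(-10 * t_i) + 3 * exp(-4 * t_i)
#   f_i(x) = x3 * exp(-t_i * x1) - x4 * exp(-t_i * x2) + x6 * exp(-t_i * x5) - y_i
#   objective = sum_i f_i(x)^2
biggs_obj <- function(x, m = 13) {
  ti <- 0.1 * seq_len(m)
  yi <- exp(-ti) - 5 * exp(-10 * ti) + 3 * exp(-4 * ti)
  fi <- x[3] * exp(-ti * x[1]) - x[4] * exp(-ti * x[2]) +
        x[6] * exp(-ti * x[5]) - yi
  sum(fi^2)
}

biggs_obj(c(1, 10, 1, 5, 4, 3))  # 0: the residuals cancel exactly
biggs_obj(c(4, 10, 3, 5, 1, 1))  # 0: the same exponential terms, reordered
```

At both points the three exponential terms reproduce y_i exactly, which is why f = 0 regardless of m.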
A list containing:

fn: Objective function which calculates the value given an input parameter vector.
gr: Gradient function which calculates the gradient vector given an input parameter vector.
he: If available, the Hessian matrix (second derivatives) of the function w.r.t. the parameters at the given values.
fg: A function which, given the parameter vector, calculates both the objective value and gradient, returning a list with members fn and gr, respectively.
x0: Standard starting point.
fmin: Reported minimum.
xmin: Parameters at the reported minimum.
Moré, J. J., Garbow, B. S., & Hillstrom, K. E. (1981). Testing unconstrained optimization software. ACM Transactions on Mathematical Software (TOMS), 7(1), 17-41. https://doi.org/10.1145/355934.355936
Biggs, M. C. (1971). Minimization algorithms making use of non-quadratic properties of the objective function. IMA Journal of Applied Mathematics, 8(3), 315-327.
fun <- biggs_exp6()
# Optimize using the standard starting point
x0 <- fun$x0
res_x0 <- stats::optim(par = x0, fn = fun$fn, gr = fun$gr, method = "L-BFGS-B")
# Use your own starting point
res <- stats::optim(1:6, fun$fn, fun$gr, method = "L-BFGS-B")
# Use 20 summand functions
fun20 <- biggs_exp6(m = 20)
res <- stats::optim(fun20$x0, fun20$fn, fun20$gr, method = "L-BFGS-B")
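When experimenting with starting points, it can be useful to sanity-check an analytic gradient against central finite differences. The sketch below does this with a self-contained re-implementation of the Biggs EXP6 objective and its gradient following the MGH (1981) formula (an illustration, not the package code; with the package installed you would compare fun$gr against differences of fun$fn instead):

```r
# Self-contained Biggs EXP6 objective and analytic gradient (MGH 1981 formula).
biggs_obj <- function(x, m = 13) {
  ti <- 0.1 * seq_len(m)
  yi <- exp(-ti) - 5 * exp(-10 * ti) + 3 * exp(-4 * ti)
  fi <- x[3] * exp(-ti * x[1]) - x[4] * exp(-ti * x[2]) +
        x[6] * exp(-ti * x[5]) - yi
  sum(fi^2)
}
biggs_grad <- function(x, m = 13) {
  ti <- 0.1 * seq_len(m)
  yi <- exp(-ti) - 5 * exp(-10 * ti) + 3 * exp(-4 * ti)
  fi <- x[3] * exp(-ti * x[1]) - x[4] * exp(-ti * x[2]) +
        x[6] * exp(-ti * x[5]) - yi
  # Jacobian of the residuals fi with respect to x1..x6
  J <- cbind(-ti * x[3] * exp(-ti * x[1]),
              ti * x[4] * exp(-ti * x[2]),
              exp(-ti * x[1]),
             -exp(-ti * x[2]),
             -ti * x[6] * exp(-ti * x[5]),
              exp(-ti * x[5]))
  2 * drop(crossprod(J, fi))  # gradient of sum(fi^2) is 2 * t(J) %*% fi
}
# Central finite-difference gradient for comparison
num_grad <- function(fn, x, h = 1e-6) {
  vapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)
    (fn(x + e) - fn(x - e)) / (2 * h)
  }, numeric(1))
}
x <- c(1, 2, 1, 1, 1, 1)  # an arbitrary test point
max(abs(biggs_grad(x) - num_grad(biggs_obj, x)))  # small, e.g. below 1e-6
```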