funconstrain | R Documentation
The funconstrain package provides 35 test functions taken from the paper of More', Garbow and Hillstrom, useful for testing numerical optimization methods.
The functions all take the form of a nonlinear least squares problem: the goal is to minimize the sum of the squares of m functions, f_i, each of which is a function of the same n parameters:

f\left(\left[x_1, x_2, \ldots, x_n \right ]\right) =
\sum_{i = 1}^{m}f_{i}^2\left(\left[x_1, x_2, \ldots, x_n \right ]\right)
The documentation of each function provides details on:

m: The number of summand functions, f_i. Some functions are defined for a fixed m; others require the user to specify a value of m.

n: The number of parameters. Some functions are defined for a fixed value of n; others allow different values of n to be specified. This is done by passing a vector of parameters of the desired length to the objective function.

The values and locations of minima: Some functions have multiple minima, which can make them less desirable for comparing methods. Some functions specify only the value of the objective function at the minimum, not its location.
Most numerical optimization methods use both the objective value and its gradient with respect to the parameters. So for a given test problem, you will want the objective function itself, a function to calculate the gradient, and a place to start the optimization from. The functions provided by funconstrain are therefore not used directly in an optimization method: they are factory functions which generate the objective and gradient functions you want.
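The factory pattern can be sketched in a few lines of base R. The following is a hypothetical stand-in, not part of funconstrain: a hand-rolled factory that returns a list in the same shape (fn, gr, x0), using the two-parameter Rosenbrock function.

```r
# Hypothetical mini-factory mimicking the shape of a funconstrain factory:
# it returns a list with the objective (fn), gradient (gr), and a standard
# starting point (x0) for the two-parameter Rosenbrock function.
make_rosenbrock <- function() {
  list(
    # Objective: f(x) = 100 * (x2 - x1^2)^2 + (1 - x1)^2
    fn = function(par) {
      x1 <- par[1]; x2 <- par[2]
      100 * (x2 - x1 ^ 2) ^ 2 + (1 - x1) ^ 2
    },
    # Analytical gradient of the objective
    gr = function(par) {
      x1 <- par[1]; x2 <- par[2]
      c(-400 * x1 * (x2 - x1 ^ 2) - 2 * (1 - x1),
        200 * (x2 - x1 ^ 2))
    },
    # The standard starting point for this problem
    x0 = c(-1.2, 1)
  )
}

prob <- make_rosenbrock()
res <- stats::optim(par = prob$x0, fn = prob$fn, gr = prob$gr,
                    method = "L-BFGS-B")
# res$par should be close to the minimum at c(1, 1)
```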
The test problems are all generated by calling the desired function. Some functions take the following parameter:

m: The number of summand functions, as described above. This needs to be provided only when the test problem allows for a variable m. Default values are provided in all these cases, but be aware that the defaults were chosen arbitrarily, and some functions put a restriction on the acceptable value of m based on the number of parameters, n. If in doubt, specify it explicitly.
The return value is a list containing:

fn: The objective function. This takes a numeric vector of length n representing the set of parameters to be optimized. It returns a scalar, the value of the objective function.

gr: The gradient function of the objective. This takes a numeric vector of length n representing the set of parameters to be optimized. It returns a vector, also of length n, representing the gradient of the objective function with respect to the n parameters.

fg: A function which calculates both the objective function and the gradient in one call. This takes a numeric vector of length n representing the set of parameters to be optimized. It returns a list with two members: fn, the scalar objective value; and gr, the gradient vector. This is a convenience function: not all optimization methods can make use of it, but those that can commonly require both the function and gradient for a given set of parameters, and there are often enough shared calculations to make computing both at the same time more efficient than calling the fn and gr functions separately.

x0: A suggested starting location. For those functions where the number of parameters, n, is fixed, this is a fixed-length numeric vector. Where n can take multiple values, this is a function which takes one value, n, and returns a numeric vector of length n. Where x0 is a function, n also has a default. Like m, these defaults have been chosen arbitrarily, but with the idea that if you were to use these test functions in the order given in the MGH paper, the value of n increases from 2 to 50.

fmin: The reported minimum function value.

xmin: A numeric vector with the reported minimizing parameters.
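The saving that fg offers can be illustrated with a toy least-squares problem (this example is hypothetical and not part of funconstrain): the objective and the gradient both need the residuals, so computing them once and reusing them is cheaper than calling fn and gr separately.

```r
# Hypothetical fg-style function for the toy problem f(x) = sum_i (x_i^2 - i)^2.
# Both the objective and its gradient need the residuals r_i = x_i^2 - i,
# so they are computed once and shared.
toy_fg <- function(par) {
  r <- par ^ 2 - seq_along(par)   # residuals, computed once
  list(
    fn = sum(r ^ 2),              # objective: sum of squared residuals
    gr = 4 * r * par              # gradient: d/dx_i sum r_i^2 = 2 r_i * 2 x_i
  )
}

both <- toy_fg(c(1, 2, 3))
# both$fn and both$gr match what separate objective and gradient calls
# would return, but the residuals were only computed once.
```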
It's much more straightforward than it sounds. See the 'Examples' section below or the examples in each function.
The names of the test functions are given below, using the same numbering as in the original More', Garbow and Hillstrom paper:

rosen: Rosenbrock function.
freud_roth: Freudenstein and Roth function.
powell_bs: Powell badly scaled function.
brown_bs: Brown badly scaled function.
beale: Beale function.
jenn_samp: Jennrich and Sampson function.
helical: Helical valley function.
bard: Bard function.
gauss: Gaussian function.
meyer: Meyer function.
gulf: Gulf research and development function.
box_3d: Box three-dimensional function.
powell_s: Powell singular function.
wood: Wood function.
kow_osb: Kowalik and Osborne function.
brown_den: Brown and Dennis function.
osborne_1: Osborne 1 function.
biggs_exp6: Biggs EXP6 function.
osborne_2: Osborne 2 function.
watson: Watson function.
ex_rosen: Extended Rosenbrock function.
ex_powell: Extended Powell function.
penalty_1: Penalty function I.
penalty_2: Penalty function II.
var_dim: Variable dimensioned function.
trigon: Trigonometric function.
brown_al: Brown almost-linear function.
disc_bv: Discrete boundary value function.
disc_ie: Discrete integral equation function.
broyden_tri: Broyden tridiagonal function.
broyden_band: Broyden banded function.
linfun_fr: Linear function - full rank.
linfun_r1: Linear function - rank 1.
linfun_r1z: Linear function - rank 1 with zero columns and rows.
chebyquad: Chebyquad function.
For details, see the specific function help text.
Maintainer: James Melville jlmelville@gmail.com
Other contributors:
John C Nash nashjc@uottawa.ca [contributor]
More', J. J., Garbow, B. S., & Hillstrom, K. E. (1981). Testing unconstrained optimization software. ACM Transactions on Mathematical Software (TOMS), 7(1), 17-41. doi:10.1145/355934.355936
Useful links:
Report bugs at https://github.com/jlmelville/funconstrain/issues
# Fixed m and n
# The famous Rosenbrock function has fixed m and n (2 in each case)
rbrock <- rosen()
# Pass the objective function, gradient and starting point to an optimization
# method:
res <- stats::optim(par = rbrock$x0, fn = rbrock$fn, gr = rbrock$gr,
method = "L-BFGS-B")
# Or feel free to ignore the suggested starting point and use your own:
res <- stats::optim(par = c(1.2, 1.2), fn = rbrock$fn, gr = rbrock$gr,
method = "L-BFGS-B")
# Multiple m, fixed n
# The gulf test problem allows for multiple m:
gulf_m10 <- gulf(m = 10)
res_m10 <- stats::optim(par = gulf_m10$x0, fn = gulf_m10$fn, gr =
gulf_m10$gr, method = "L-BFGS-B")
# Using a different m will give different results, although in the case
# of the gulf problem you reach the same minimum.
gulf_m20 <- gulf(m = 20)
res_m20 <- stats::optim(par = gulf_m20$x0, fn = gulf_m20$fn, gr =
gulf_m20$gr, method = "L-BFGS-B")
# Fixed m, multiple n
# The Chebyquad function is defined for variable values of n, but the value
# of m is fixed
cheby <- chebyquad()
# To use different values of n, we provide it to the starting point x0, which
# is a function when n can take multiple values.
# A five-parameter version:
res_n5 <- stats::optim(par = cheby$x0(n = 5), fn = cheby$fn, gr = cheby$gr,
method = "L-BFGS-B")
# And a 10-parameter version:
res_n10 <- stats::optim(par = cheby$x0(n = 10), fn = cheby$fn, gr = cheby$gr,
method = "L-BFGS-B")
# Multiple m, multiple n
# The linear function full rank function requires both m and n to be
# specified:
lf_m10 <- linfun_fr(m = 10)
# The n = 10, m = 10 solution:
res_m10_n10 <- stats::optim(par = lf_m10$x0(n = 10), fn = lf_m10$fn, gr =
lf_m10$gr, method = "L-BFGS-B")
# The n = 5, m = 10 solution:
res_m10_n5 <- stats::optim(par = lf_m10$x0(n = 5), fn = lf_m10$fn, gr =
lf_m10$gr, method = "L-BFGS-B")
# Repeat the above, but this time with m = 20
lf_m20 <- linfun_fr(m = 20)
# The n = 10, m = 20 solution:
res_m20_n10 <- stats::optim(par = lf_m20$x0(n = 10), fn = lf_m20$fn, gr =
lf_m20$gr, method = "L-BFGS-B")
# The n = 5, m = 20 solution:
res_m20_n5 <- stats::optim(par = lf_m20$x0(n = 5), fn = lf_m20$fn, gr =
lf_m20$gr, method = "L-BFGS-B")
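When adapting these test problems to a new optimizer, a cheap sanity test is to check that gr is consistent with fn using central finite differences. The helper below is a sketch, not part of funconstrain; it works with any fn/gr pair, and is demonstrated on a hand-rolled quadratic so the example stands alone.

```r
# Central-difference gradient check: returns the largest absolute discrepancy
# between the analytical gradient gr and a numerical estimate from fn.
grad_check <- function(fn, gr, par, eps = 1e-6) {
  num <- vapply(seq_along(par), function(i) {
    up <- par; up[i] <- up[i] + eps
    dn <- par; dn[i] <- dn[i] - eps
    (fn(up) - fn(dn)) / (2 * eps)
  }, numeric(1))
  max(abs(num - gr(par)))
}

# Hand-rolled quadratic so the example runs without funconstrain:
fn <- function(x) sum((x - 1:3) ^ 2)
gr <- function(x) 2 * (x - 1:3)
grad_check(fn, gr, c(0.5, 2.5, -1))   # close to zero for a correct gradient
```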