funconstrain: Test functions for unconstrained minimization.

Description

The funconstrain package provides 35 test functions taken from the paper of Moré, Garbow and Hillstrom, useful for testing numerical optimization methods.

Details

The functions all take the form of a nonlinear least squares problem: the goal is to minimize the sum of the squares of m functions, f_i, each of the same n parameters:

f\left(\left[x_1, x_2, \ldots, x_n\right]\right) = \sum_{i = 1}^{m} f_{i}^2\left(\left[x_1, x_2, \ldots, x_n\right]\right)
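As a concrete illustration of this form (hand-written here, not part of the package), the Rosenbrock problem (MGH problem 1) has m = n = 2, with f_1(x) = 10(x_2 - x_1^2) and f_2(x) = 1 - x_1:

```r
# Rosenbrock objective written explicitly as a sum of squared summand
# functions, f(x) = f_1(x)^2 + f_2(x)^2
rosen_obj <- function(x) {
  f1 <- 10 * (x[2] - x[1]^2)
  f2 <- 1 - x[1]
  f1^2 + f2^2
}
rosen_obj(c(-1.2, 1)) # 24.2 at the standard starting point
rosen_obj(c(1, 1))    # 0 at the global minimum
```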

Function Details

The documentation of each function provides details on:

  • m The number of summand functions, f_i. Some functions are defined for a fixed m, others require the user to specify a value of m.

  • n The number of parameters. Some functions are defined for a fixed value of n, others allow different values of n to be specified. This is done by passing a vector of parameters of the desired length to the objective function.

  • The values and locations of minima. Some functions have multiple minima, which can make them less desirable for comparing methods. Some functions have only the value of the objective function at the minima specified, and not the location.

Test Function Parameters

Most numerical optimization methods use both the objective value and the gradient of that value with respect to the parameters. So for a given test problem, you will want the objective function itself, a function to calculate its gradient, and a place to start the optimization from. The functions provided by funconstrain are therefore not used directly in an optimization method: they are factory functions which generate the objective and gradient functions you need.

The test problems are all generated by calling the desired function. Some functions have the following parameter:

  • m The number of summand functions, as described above. This only needs to be provided when the test problem allows for variable m. Default values are provided in all these cases, but be aware that the defaults were chosen arbitrarily, and some functions place a restriction on the acceptable values of m based on the number of parameters, n. If in doubt, specify m explicitly.

Test Function Return Value

The return value is a list containing:

  • fn The objective function. This takes a numeric vector of length n representing the set of parameters to be optimized. It returns a scalar, the value of the objective function.

  • gr The gradient function of the objective. This takes a numeric vector of length n representing the set of parameters to be optimized. It returns a vector also of length n, representing the gradient of the objective function with respect to the n parameters.

  • fg A function which calculates both the objective function and the gradient in one call. This takes a numeric vector of length n representing the set of parameters to be optimized. It returns a list with two members: fn, the scalar objective value; and gr, the gradient vector. This is a convenience function: not all optimization methods can make use of it, but those that can often require both the function value and the gradient at the same set of parameters, and because the two calculations usually share intermediate results, computing them together can be more efficient than calling the fn and gr functions separately.

  • x0 A suggested starting location. For those functions where the number of parameters, n, is fixed, this is a fixed-length numeric vector. Where n can take multiple values, this is a function which takes one value, n, and returns a numeric vector of length n. Where x0 is a function, n also has a default. Like the defaults for m, these have been chosen arbitrarily, but with the idea that if you were to use the test functions in the order given in the MGH paper, the value of n increases from 2 to 50.

  • fmin The reported minimum value of the objective function.

  • xmin A numeric vector giving the parameters at the reported minimum.

It's much more straightforward than it sounds. See the 'Examples' section below or the examples in each function.
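To make the factory structure concrete, here is a minimal hand-rolled sketch of such a factory for the Rosenbrock problem. This is illustrative code written for this description, not taken from the package, but the returned list follows the same fn/gr/fg/x0 layout described above; note how fg computes the shared term once for both the value and the gradient:

```r
# A minimal sketch of the factory pattern: calling make_rosen() returns a
# list of functions for one test problem (illustrative, not package code).
make_rosen <- function() {
  fn <- function(x) {
    100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
  }
  gr <- function(x) {
    c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
      200 * (x[2] - x[1]^2))
  }
  # fg reuses the shared term (x[2] - x[1]^2) for both value and gradient
  fg <- function(x) {
    t <- x[2] - x[1]^2
    list(fn = 100 * t^2 + (1 - x[1])^2,
         gr = c(-400 * x[1] * t - 2 * (1 - x[1]), 200 * t))
  }
  list(fn = fn, gr = gr, fg = fg, x0 = c(-1.2, 1))
}

prob <- make_rosen()
prob$fn(prob$x0)  # objective value at the suggested start
prob$fg(c(1, 1))  # list(fn = 0, gr = c(0, 0)) at the minimum
```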

Available Functions

The names of the test functions are given below, using the same numbering as in the original Moré, Garbow and Hillstrom paper:

  1. rosen Rosenbrock function.

  2. freud_roth Freudenstein and Roth function.

  3. powell_bs Powell badly scaled function.

  4. brown_bs Brown badly scaled function.

  5. beale Beale function.

  6. jenn_samp Jennrich and Sampson function.

  7. helical Helical valley function.

  8. bard Bard function.

  9. gauss Gaussian function.

  10. meyer Meyer function.

  11. gulf Gulf research and development function.

  12. box_3d Box three-dimensional function.

  13. powell_s Powell singular function.

  14. wood Wood function.

  15. kow_osb Kowalik and Osborne function.

  16. brown_den Brown and Dennis function.

  17. osborne_1 Osborne 1 function.

  18. biggs_exp6 Biggs EXP6 function.

  19. osborne_2 Osborne 2 function.

  20. watson Watson function.

  21. ex_rosen Extended Rosenbrock function.

  22. ex_powell Extended Powell function.

  23. penalty_1 Penalty function I.

  24. penalty_2 Penalty function II.

  25. var_dim Variable dimensioned function.

  26. trigon Trigonometric function.

  27. brown_al Brown almost-linear function.

  28. disc_bv Discrete boundary value function.

  29. disc_ie Discrete integral equation function.

  30. broyden_tri Broyden tridiagonal function.

  31. broyden_band Broyden banded function.

  32. linfun_fr Linear function - full rank.

  33. linfun_r1 Linear function - rank 1.

  34. linfun_r1z Linear function - rank 1 with zero columns and rows.

  35. chebyquad Chebyquad function.

For details, see the specific function help text.

Author(s)

Maintainer: James Melville jlmelville@gmail.com

References

Moré, J. J., Garbow, B. S., & Hillstrom, K. E. (1981). Testing unconstrained optimization software. ACM Transactions on Mathematical Software (TOMS), 7(1), 17-41. doi:10.1145/355934.355936

Examples


# Fixed m and n
# The famous Rosenbrock function has fixed m and n (2 in each case)
rbrock <- rosen()
# Pass the objective function, gradient and starting point to an optimization
# method:
res <- stats::optim(par = rbrock$x0, fn = rbrock$fn, gr = rbrock$gr,
                    method = "L-BFGS-B")
# Or feel free to ignore the suggested starting point and use your own:
res <- stats::optim(par = c(1.2, 1.2), fn = rbrock$fn, gr = rbrock$gr,
                    method = "L-BFGS-B")

# Multiple m, fixed n
# The gulf test problem allows for multiple m:
gulf_m10 <- gulf(m = 10)
res_m10 <- stats::optim(par = gulf_m10$x0, fn = gulf_m10$fn,
                        gr = gulf_m10$gr, method = "L-BFGS-B")

# Using a different m will give different results, although in the case
# of the gulf problem you reach the same minimum.
gulf_m20 <- gulf(m = 20)
res_m20 <- stats::optim(par = gulf_m20$x0, fn = gulf_m20$fn,
                        gr = gulf_m20$gr, method = "L-BFGS-B")

# Fixed m, multiple n
# The Chebyquad function is defined for variable values of n, but the value
# of m is fixed
cheby <- chebyquad()

# To use different values of n, we provide it to the starting point x0, which
# is a function when n can take multiple values.
# A five-parameter version:
res_n5 <- stats::optim(par = cheby$x0(n = 5), fn = cheby$fn, gr = cheby$gr,
                       method = "L-BFGS-B")
# And a 10-parameter version:
res_n10 <- stats::optim(par = cheby$x0(n = 10), fn = cheby$fn, gr = cheby$gr,
                        method = "L-BFGS-B")

# Multiple m, multiple n
# The linear function - full rank problem requires both m and n to be
# specified:
lf_m10 <- linfun_fr(m = 10)

# The n = 10, m = 10 solution:
res_m10_n10 <- stats::optim(par = lf_m10$x0(n = 10), fn = lf_m10$fn,
                            gr = lf_m10$gr, method = "L-BFGS-B")

# The n = 5, m = 10 solution:
res_m10_n5 <- stats::optim(par = lf_m10$x0(n = 5), fn = lf_m10$fn,
                           gr = lf_m10$gr, method = "L-BFGS-B")

# Repeat the above, but this time with m = 20
lf_m20 <- linfun_fr(m = 20)

# The n = 10, m = 20 solution:
res_m20_n10 <- stats::optim(par = lf_m20$x0(n = 10), fn = lf_m20$fn,
                            gr = lf_m20$gr, method = "L-BFGS-B")

# The n = 5, m = 20 solution:
res_m20_n5 <- stats::optim(par = lf_m20$x0(n = 5), fn = lf_m20$fn,
                           gr = lf_m20$gr, method = "L-BFGS-B")


jlmelville/funconstrain documentation built on April 17, 2024, 7:47 p.m.