mboAll: Efficient global optimization including meta model validation

Description

Implements the efficient global optimization (EGO) algorithm based on Kriging. It is a wrapper around mbo1d that additionally includes Latin hypercube design generation and meta model validation of the Kriging model.

Usage

mboAll(loss_func, n_steps, initDesign, lower_bounds, 
upper_bounds, x_start, isoInput=FALSE, addInfo=TRUE, 
maxRunsMult=1, repMult=1, tol_input=.Machine$double.eps^0.25, 
envir=parent.frame(), metaModelSelect=TRUE, 
EIopt="1Dmulti", GenSAmaxCall=100, timeAlloc="constant",
EItype="EI")

Arguments

loss_func

Loss function to be minimized.

n_steps

Number of steps of the EGO algorithm.

initDesign

Number of initial design points to be evaluated. The higher the number, the more function evaluations of the loss function are required, but the approximation with Kriging will be more accurate.

lower_bounds

Vector of lower bounds of the tuning parameters. First element is the lower bound of the first tuning parameter, second element the lower bound of the next tuning parameter etc.

upper_bounds

Vector of upper bounds of the tuning parameters. First element is the upper bound of the first tuning parameter, second element the upper bound of the next tuning parameter etc.

x_start

Starting value of the one dimensional optimization algorithm. See optimize1dMulti.

isoInput

Force the covariance structure of the Kriging model to have only one range parameter (isotropic kernel). For details see km.

addInfo

Should additional information be displayed during optimization? (logical value). Default is TRUE.

maxRunsMult

Multiplies the base number of iterations in the conditional optimization. Default is to use the number of hyperparameters. See optimize1dMulti.

repMult

Multiplies the base number of random starting values for helping to avoid local optima. Default is the number of hyperparameters. See optimize1dMulti.

tol_input

Convergence tolerance of each one dimensional sub-optimization. Lower values are more accurate, but require many more function evaluations. Default is the fourth root of the machine double accuracy. See optimize1dMulti.

envir

Internal variable used to store environments. Default is the environment one level up (the calling frame). Modification is unnecessary.

metaModelSelect

Should the covariance kernel of the Kriging model be automatically selected? (logical scalar) Default is TRUE and corresponds to the Matérn 5/2 covariance structure.

EIopt

Specifies which algorithm is used to optimize the expected improvement criterion. Two alternatives are available: "1Dmulti" and "GenSA". The former uses the conditional 1D algorithm and the latter generalized simulated annealing.

GenSAmaxCall

Maximum number of function calls per parameter to estimate in generalized simulated annealing. Higher values give more accurate estimates, but slow down the optimization.

timeAlloc

Specifies how the new noise variance is influenced by iteration progress. Default is "constant" allocation. The other available option, "zero", corresponds to the original expected improvement criterion.

EItype

Defines the type of the improvement criterion. The default "EI" corresponds to the expected improvement. As an alternative, "EQI", the expected quantile improvement, is also available.
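For reference, the standard expected improvement criterion (for minimization) that "EI" refers to can be written in a few lines of base R. This is a generic textbook sketch, not the package's internal implementation; the name expected_improvement is hypothetical.

```r
# Expected improvement for minimization:
#   EI(x) = (f_min - mu) * Phi(z) + s * phi(z),  z = (f_min - mu) / s
# mu, s: Kriging posterior mean and standard deviation at x
# f_min: best observed loss so far
expected_improvement <- function(mu, s, f_min) {
  z <- (f_min - mu) / s
  # at s == 0 the criterion degenerates to the plain improvement
  ifelse(s > 0,
         (f_min - mu) * pnorm(z) + s * dnorm(z),
         pmax(f_min - mu, 0))
}
```

The criterion is always non-negative and balances exploitation (low predicted mean) against exploration (high predictive uncertainty).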

Details

In addition to the functionality of mbo1d, a Latin hypercube design is generated. The design is generated by maximizing the minimum distance between the design points (maximin criterion). For reference see maximinLHS. The complete initial design is then evaluated with the loss function. The model validation of the Kriging model searches for the best covariance kernel structure among five alternative specifications (see covTensorProduct-class). Performance is evaluated with a Gaussian likelihood using leave-one-out estimated expectations and variances of the Kriging model.
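The design-generation step can be sketched in base R. This is an illustrative sketch only: the package relies on lhs::maximinLHS for the actual design, and the helper names random_lhs, maximin_lhs and scale_design are hypothetical. The sketch approximates the maximin criterion by drawing several random Latin hypercubes and keeping the one with the largest minimum pairwise distance, then scales the unit-cube design to the bounds of the tuning parameters.

```r
# One stratified sample per dimension, each randomly permuted:
# exactly one point falls in each of the n strata of every dimension.
random_lhs <- function(n, k) {
  sapply(seq_len(k), function(j) (sample(n) - runif(n)) / n)
}

# Crude maximin search: keep the candidate design with the
# largest minimum pairwise distance (the criterion maximinLHS optimizes).
maximin_lhs <- function(n, k, tries = 50) {
  best <- random_lhs(n, k)
  for (i in seq_len(tries - 1)) {
    cand <- random_lhs(n, k)
    if (min(dist(cand)) > min(dist(best))) best <- cand
  }
  best
}

# Map the unit-cube design to [lower, upper] per dimension.
scale_design <- function(X, lower, upper) {
  sweep(sweep(X, 2, upper - lower, "*"), 2, lower, "+")
}

# Example: 15 initial design points for two tuning parameters,
# matching the bounds used in the Examples section below.
set.seed(1)
X <- scale_design(maximin_lhs(n = 15, k = 2),
                  lower = c(-5, 0), upper = c(10, 15))
```

Each row of X would then be evaluated with loss_func to form the initial Kriging training data.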

Value

List with the following components:

Note

This function is supplied for model customization and is intended for experienced users. The more user-friendly function tuneMboLevelCvKDSN uses this code as an intermediate step.

Author(s)

Thomas Welchowski welchow@imbie.meb.uni-bonn.de

References

Michael Stein (1987). Large Sample Properties of Simulations Using Latin Hypercube Sampling. Technometrics, 29, 143-151

Carl Edward Rasmussen and Christopher K. I. Williams (2006). Gaussian Processes for Machine Learning. MIT Press

See Also

km, leaveOneOut.km, maximinLHS, tuneMboLevelCvKDSN, mbo1d

Examples

# Example with Branin function
library(globalOptTests)
tryBranin <- mboAll(loss_func=function(x) goTest(par=x, fnName="Branin",
checkDim=FALSE), n_steps=5, initDesign=15, lower_bounds=c(-5, 0),
upper_bounds=c(10, 15), x_start=c(5, -5))
abs(tryBranin$value - getGlobalOpt("Branin"))

kernDeepStackNet documentation built on May 2, 2019, 8:16 a.m.