sumt                R Documentation
Sequential Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.
This implementation is primarily intended to be called from other maximization routines, such as maxNR.
sumt(fn, grad=NULL, hess=NULL,
start,
maxRoutine, constraints,
SUMTTol = sqrt(.Machine$double.eps),
SUMTPenaltyTol = sqrt(.Machine$double.eps),
SUMTQ = 10,
SUMTRho0 = NULL,
printLevel=print.level, print.level = 0, SUMTMaxIter = 100, ...)
fn
function of a (single) vector parameter. The function may have further arguments (passed by ...), but those are not treated as the parameter.
grad
gradient function of fn.
hess
function, Hessian of fn.
start
numeric, initial value of the parameter.
maxRoutine
maximization algorithm, such as maxNR.
constraints
list, information for constrained maximization. Currently two components are supported: eqA and eqB for the linear equality constraint A %*% theta + B = 0 (see the encoding sketch after this list).
SUMTTol
stopping condition. If the estimates at successive outer iterations are close enough, i.e. the maximum absolute value of the component-wise difference is smaller than SUMTTol, the algorithm stops. Note that this does not necessarily mean that the constraints are satisfied. If the penalty function is too “weak”, SUMT may repeatedly find the same optimum; in that case a warning is issued. The user may set SUMTTol to a lower value, e.g. to zero.
SUMTPenaltyTol
stopping condition. If the barrier value (also called penalty) is smaller than SUMTPenaltyTol, the algorithm stops.
SUMTQ
a double greater than one, controlling the growth of the penalty parameter: \varrho_{k+1} = SUMTQ * \varrho_k.
SUMTRho0
initial value for the penalty parameter \varrho. One should consider supplying SUMTRho0 if the unconstrained problem does not have a maximum, or if that maximum is too far from the constrained optimum.
printLevel
integer, debugging information. A larger number prints more details.
print.level
same as printLevel, for backward compatibility.
SUMTMaxIter
maximum number of SUMT (outer) iterations.
...
other arguments to maxRoutine and fn.
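To make the eqA/eqB encoding concrete, here is a small hypothetical illustration (the matrix values are invented for this sketch): the constraint A %*% theta + B = 0 below expresses x1 + x2 = 1 and x1 = x3 for a three-element parameter vector.

## hypothetical encoding of  x1 + x2 = 1  and  x1 = x3
## as  A %*% theta + B = 0  for the 'constraints' argument
A <- matrix(c(1, 1,  0,
              1, 0, -1),
            nrow = 2, byrow = TRUE)
B <- c(-1, 0)
constraints <- list(eqA = A, eqB = B)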
The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function f subject to constraints, it uses a non-negative penalty function P, such that P(x) is zero iff x satisfies the constraints. One iteratively minimizes f(x) + \varrho_k P(x), where the \varrho values are increased according to the rule \varrho_{k+1} = q \varrho_k for some constant q > 1, until convergence is achieved in the sense that the barrier value P(x)'P(x) is close to zero. Note that there is no guarantee that the global constrained optimum is found. Standard practice recommends using the best solution found in “sufficiently many” replications.
Any of the maximization algorithms in the maxLik package, such as maxNR, can be used for the unconstrained step. The analytic gradient and Hessian are used if provided.
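The outer loop can be sketched as follows. This is an illustrative reimplementation under the update rule above, not the package's actual code; sumtSketch and its argument names are hypothetical, and maxNR serves as the unconstrained maximizer.

library(maxLik)

sumtSketch <- function(fn, start, A, B, rho0 = 1, q = 10, maxIter = 100,
                       tol = sqrt(.Machine$double.eps)) {
  ## penalty P(theta) = (A theta + B)'(A theta + B): zero iff constraints hold
  P <- function(theta) {
    v <- A %*% theta + B
    sum(v * v)
  }
  rho <- rho0
  theta <- start
  res <- NULL
  for (k in seq_len(maxIter)) {
    thetaOld <- theta
    ## unconstrained step: maximize fn minus the weighted penalty
    res <- maxNR(function(th) fn(th) - rho * P(th), start = theta)
    theta <- coef(res)
    ## outer convergence: successive estimates close enough (cf. SUMTTol)
    if (max(abs(theta - thetaOld)) < tol) break
    rho <- q * rho  ## grow the penalty weight (cf. SUMTQ)
  }
  res
}

Starting with a small \varrho and growing it geometrically keeps the early unconstrained subproblems well behaved; a very large initial penalty can make them ill-conditioned.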
Object of class 'maxim'. In addition, the result contains a component
constraints
a list describing the constrained optimization.
In case of equality constraints, it may be more efficient to enclose the function in a wrapper function. The wrapper calculates the full set of parameters from a smaller set of free parameters and the constraints, as sketched below.
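For instance (a hypothetical sketch, reusing the x + y = 1 constraint of the example below), the wrapper eliminates one parameter and lets an unconstrained maximizer do all the work:

library(maxLik)

## wrapper: under x + y = 1, maximize over x alone and rebuild
## the full parameter vector inside the objective
fWrapped <- function(x) {
  theta <- c(x, 1 - x)
  exp(-sum(theta^2))
}
resW <- maxNR(fWrapped, start = 0)
summary(resW)  ## optimum at x = 0.5, hence y = 0.5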
Ott Toomet, Arne Henningsen
sumt in package clue.
## We maximize exp(-x^2 - y^2) subject to x + y = 1
library(maxLik)  ## provides sumt() and maxNR()
hatf <- function(theta) {
x <- theta[1]
y <- theta[2]
exp(-(x^2 + y^2))
## Note: you may prefer exp(- theta %*% theta) instead
}
## use constraints: x + y = 1
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- sumt(hatf, start=c(0,0), maxRoutine=maxNR,
constraints=list(eqA=A, eqB=B))
print(summary(res))
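A quick sanity check (not part of the original example) verifies that the constraint holds at the reported optimum:

## the estimates should satisfy x + y = 1 up to numerical tolerance
sum(coef(res))  ## approximately 1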