sumt: Equality-constrained optimization


View source: R/sumt.R

Description

Sequentially Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.

This implementation is primarily intended to be called from other maximization routines, such as maxNR.

Usage

sumt(fn, grad = NULL, hess = NULL,
     start,
     maxRoutine, constraints,
     SUMTTol = sqrt(.Machine$double.eps),
     SUMTPenaltyTol = sqrt(.Machine$double.eps),
     SUMTQ = 10,
     SUMTRho0 = NULL,
     printLevel = print.level, print.level = 0, SUMTMaxIter = 100, ...)

Arguments

fn

function of a (single) vector parameter. The function may have more arguments (passed by ...), but those are not treated as the parameter.

grad

gradient function of fn. NULL if missing

hess

function, Hessian of fn. NULL if missing

start

numeric, initial value of the parameter

maxRoutine

maximization algorithm, such as maxNR

constraints

list, information for constrained maximization. Currently two components are supported: eqA and eqB for linear equality constraints of the form A %*% beta + B = 0. The user must ensure that the matrices A and B are conformable; a short illustration of this encoding follows the argument list below.

SUMTTol

stopping condition. If the estimates at successive outer iterations are close enough, i.e. the maximum absolute difference between their components is smaller than SUMTTol, the algorithm stops.

Note this does not necessarily mean that the constraints are satisfied. If the penalty function is too “weak”, SUMT may repeatedly find the same optimum. In that case a warning is issued. The user may set SUMTTol to a lower value, e.g. to zero.

SUMTPenaltyTol

stopping condition. If the barrier value (also called the penalty) t(A %*% beta + B) %*% (A %*% beta + B) is less than SUMTPenaltyTol, the algorithm stops.

SUMTQ

a double greater than one, controlling the growth of rho as described in Details. Defaults to 10.

SUMTRho0

Initial value for rho. If not specified, a (possibly) suitable value is selected. See Details.

One should consider supplying SUMTRho0 in cases where the unconstrained problem does not have a maximum, or where the maximum is too far from the constrained optimum. Otherwise the automatically selected value may not lead to convergence.

printLevel

Integer, debugging information. Larger number prints more details.

print.level

same as ‘printLevel’, for backward compatibility

SUMTMaxIter

Maximum number of (outer) SUMT iterations

...

Other arguments to maxRoutine and fn.
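
For illustration, a single restriction x + y = 1 on a two-parameter vector beta = c(x, y) could be encoded as follows; this mirrors the Examples section below and is only a sketch of the eqA/eqB layout:

A <- matrix(c(1, 1), nrow = 1)   # one row per restriction, one column per parameter
B <- -1                          # so that A %*% beta + B = x + y - 1 = 0
constraints <- list(eqA = A, eqB = B)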

Details

The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function f subject to constraints, it uses a non-negative penalty function P, such that P(x) is zero iff x satisfies the constraints. One iteratively minimizes f(x) + rho_k P(x), where the rho values are increased according to the rule rho_{k+1} = q rho_k for some constant q > 1, until convergence is achieved in the sense that the barrier value P(x)'P(x) is close to zero. (As this routine maximizes, it correspondingly maximizes f(x) - rho_k P(x) at each outer step.) Note that there is no guarantee that the global constrained optimum is found. Standard practice is to use the best solution found in “sufficiently many” replications.
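
The following stand-alone sketch illustrates the idea in plain R; it is not the code used by sumt itself, and optim merely stands in for the unconstrained maximizer:

## maximize f(theta) subject to A %*% theta + B = 0 by repeatedly maximizing
## the penalized objective f(theta) - rho * P(theta), multiplying rho by q
f <- function(theta) exp(-sum(theta^2))
A <- matrix(c(1, 1), 1, 2)
B <- -1
P <- function(theta) { v <- A %*% theta + B; sum(v * v) }  # zero iff the constraint holds
theta <- c(0, 0); rho <- 1; q <- 10
repeat {
   res <- optim(theta, function(th) -(f(th) - rho * P(th)), method = "BFGS")
   theta <- res$par
   if (P(theta) < 1e-8 || rho > 1e8) break   # crude analogue of SUMTPenaltyTol
   rho <- q * rho
}
theta   # approximately c(0.5, 0.5)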

Any of the maximization algorithms in the maxLik package, such as maxNR, can be used for the unconstrained step.

An analytic gradient and Hessian are used if provided.
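
A short sketch of supplying an analytic gradient (assuming the maxLik package is loaded); the objective and constraint match the Examples section below:

library(maxLik)
hatf  <- function(theta) exp(-sum(theta^2))
gradf <- function(theta) -2 * theta * exp(-sum(theta^2))   # analytic gradient of hatf
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- sumt(hatf, grad = gradf, start = c(0, 0), maxRoutine = maxNR,
            constraints = list(eqA = A, eqB = B))
summary(res)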

Value

Object of class 'maxim'. In addition, it contains the component

constraints

A list, describing the constrained optimization. Includes the following components:

type

type of constrained optimization

barrier.value

value of the penalty function at maximum

code

code for the stopping condition

message

a short message, describing the stopping condition

outer.iterations

number of iterations in the SUMT step

Note

In the case of equality constraints, it may be more efficient to enclose the objective in a wrapper function. The wrapper computes the full set of parameters from a smaller set of free parameters and the constraints; a minimal sketch follows below.
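
A minimal sketch of this wrapper approach (assuming the maxLik package is loaded), using the constraint x + y = 1 from the Examples below: x is treated as the only free parameter and y is recovered inside the wrapper, so no constraint handling is needed at all.

library(maxLik)
f <- function(theta) exp(-sum(theta^2))
fWrap <- function(x) f(c(x, 1 - x))           # y = 1 - x is implied by the constraint
resW <- maxNR(fWrap, start = 0)
c(x = resW$estimate, y = 1 - resW$estimate)   # approximately c(0.5, 0.5)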

Author(s)

Ott Toomet, Arne Henningsen

See Also

sumt in package clue.

Examples

## We maximize exp(-x^2 - y^2) where x+y = 1
hatf <- function(theta) {
   x <- theta[1]
   y <- theta[2]
   exp(-(x^2 + y^2))
   ## Note: you may prefer exp(- theta %*% theta) instead
}
## use constraints: x + y = 1
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- sumt(hatf, start=c(0,0), maxRoutine=maxNR,
            constraints=list(eqA=A, eqB=B))
print(summary(res))
