Equality-constrained optimization
Sequentially Unconstrained Maximization Technique (SUMT) based optimization for linear equality constraints.
This implementation is primarily intended to be called from other
maximization routines, such as maxNR.
Arguments
- fn
function of a (single) vector parameter. The function may have more arguments (passed by ...), but those are not treated as the parameter.
- grad
gradient function of fn. NULL if missing.
- hess
Hessian function of fn. NULL if missing.
- start
numeric, initial value of the parameter.
- maxRoutine
maximization algorithm, such as maxNR.
- constraints
list, information for constrained maximization. Currently two components are supported: eqA and eqB for linear equality constraints \(A \beta + B = 0\). The user must ensure that the matrices A and B are conformable.
- SUMTTol
stopping condition. If the estimates at successive outer iterations are close enough, i.e. the maximum absolute difference between their components is smaller than SUMTTol, the algorithm stops.
Note this does not necessarily mean that the constraints are satisfied. If the penalty function is too “weak”, SUMT may repeatedly find the same optimum. In that case a warning is issued. The user may set SUMTTol to a lower value, e.g. to zero.
- SUMTPenaltyTol
stopping condition. If the barrier value (also called penalty) \((A \beta + B)'(A \beta + B)\) is less than SUMTPenaltyTol, the algorithm stops.
- SUMTQ
a double greater than one, controlling the growth of rho as described in Details. Defaults to 10.
- SUMTRho0
initial value for rho. If not specified, a (possibly) suitable value is selected. See Details. One should consider supplying SUMTRho0 in case the unconstrained problem does not have a maximum, or the maximum is too far from the constrained optimum. Otherwise the automatically selected value may not lead to convergence.
- printLevel
Integer, debugging information. Larger number prints more details.
- print.level
same as ‘printLevel’, for backward compatibility
- SUMTMaxIter
maximum number of outer (SUMT) iterations
- ...
Other arguments to maxRoutine and fn.
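As an illustration of the constraints argument, a single constraint x + y = 1 on a two-element parameter can be written as \(A \beta + B = 0\) with a 1 × 2 matrix A and a scalar B (a small sketch; the names match the Examples section below):

```r
## One linear constraint, two parameters: A is 1 x 2, B is length 1,
## so A %*% theta + B is conformable for any 2-element theta.
A <- matrix(c(1, 1), nrow = 1, ncol = 2)
B <- -1
theta <- c(0.5, 0.5)          # a parameter vector satisfying x + y = 1
A %*% theta + B               # 1 x 1 matrix holding 0
constraints <- list(eqA = A, eqB = B)
```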
Details
The Sequential Unconstrained Minimization Technique is a heuristic for constrained optimization. To minimize a function \(f\) subject to constraints, it uses a non-negative penalty function \(P\), such that \(P(x)\) is zero iff \(x\) satisfies the constraints. One iteratively minimizes \(f(x) + \varrho_k P(x)\), where the \(\varrho\) values are increased according to the rule \(\varrho_{k+1} = q \varrho_k\) for some constant \(q > 1\), until convergence is achieved in the sense that the barrier value \(P(x)\) is close to zero. Note that there is no guarantee that the global constrained optimum is found. Standard practice recommends using the best solution found in “sufficiently many” replications.
Any of the maximization algorithms in the maxLik package, such as maxNR, can be used for the unconstrained step. The analytic gradient and Hessian are used if provided.
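The outer loop described above can be sketched in a few lines. This is a minimal illustration only, not the actual sumt() implementation: the helper sumtSketch and its defaults are hypothetical, and optim() with fnscale = -1 stands in for maxRoutine.

```r
## Minimal SUMT sketch (hypothetical helper, not the real sumt()):
## maximize f subject to A %*% theta + B = 0 by repeatedly maximizing
## the penalized objective f(theta) - rho * P(theta), where
## P(theta) = (A theta + B)'(A theta + B), growing rho by the factor q.
sumtSketch <- function(f, A, B, start, rho0 = 1, q = 10,
                       tol = 1e-6, maxIter = 100) {
    penalty <- function(theta) {
        v <- A %*% theta + B
        sum(v * v)
    }
    theta <- start
    rho <- rho0
    for (k in seq_len(maxIter)) {
        obj <- function(th) f(th) - rho * penalty(th)
        ## unconstrained step: optim() with fnscale = -1 maximizes
        res <- optim(theta, obj, control = list(fnscale = -1))
        thetaNew <- res$par
        done <- max(abs(thetaNew - theta)) < tol ||  # SUMTTol-style test
                penalty(thetaNew) < tol              # SUMTPenaltyTol-style test
        theta <- thetaNew
        if (done) break
        rho <- q * rho    # rho_{k+1} = q * rho_k
    }
    list(estimate = theta, barrier.value = penalty(theta))
}

## Same problem as in the Examples: maximize exp(-x^2 - y^2) s.t. x + y = 1
f <- function(theta) exp(-sum(theta^2))
A <- matrix(c(1, 1), 1, 2)
B <- -1
r <- sumtSketch(f, A, B, start = c(0, 0))
r$estimate    # approximately c(0.5, 0.5)
```

Because the penalty only becomes strict in the limit, intermediate iterates violate the constraint slightly; the barrier.value reported at the end measures the remaining violation.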
Value
Object of class 'maxim'. In addition, it contains the following component:
- constraints
A list, describing the constrained optimization. Includes the following components:
- type
type of constrained optimization
- barrier.value
value of the penalty function at maximum
- code
code for the stopping condition
- message
a short message, describing the stopping condition
- outer.iterations
number of iterations in the SUMT step
Note
In case of equality constraints, it may be more efficient to enclose the function in a wrapper function. The wrapper calculates the full set of parameters from a smaller set of free parameters and the constraints.
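For the example constraint x + y = 1, the wrapper approach looks like this (a small sketch with hypothetical names; optimize() stands in for the maximization routine, since only one free parameter remains):

```r
hatf <- function(theta) exp(-sum(theta^2))   # original 2-parameter function

## Wrapper: optimize over x alone; y is computed from the constraint,
## so no penalty term is needed at all.
hatfWrapped <- function(x) hatf(c(x, 1 - x))

res <- optimize(hatfWrapped, interval = c(-10, 10), maximum = TRUE)
res$maximum    # approximately 0.5, i.e. (x, y) = (0.5, 0.5)
```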
See also
sumt in package clue.
Examples
## We maximize exp(-x^2 - y^2) where x+y = 1
hatf <- function(theta) {
    x <- theta[1]
    y <- theta[2]
    exp(-(x^2 + y^2))
    ## Note: you may prefer exp(- theta %*% theta) instead
}
## use constraints: x + y = 1
A <- matrix(c(1, 1), 1, 2)
B <- -1
res <- sumt(hatf, start=c(0,0), maxRoutine=maxNR,
            constraints=list(eqA=A, eqB=B))
print(summary(res))
#> --------------------------------------------
#> Newton-Raphson maximisation
#> Number of iterations: 1
#> Return code: 1
#> gradient close to zero (gradtol)
#> Function value: 0.6065399
#> Estimates:
#> estimate gradient
#> [1,] 0.4999848 -0.6065307
#> [2,] 0.4999848 -0.6065307
#>
#> Constrained optimization based on SUMT
#> Return code: 1
#> penalty close to zero
#> 5 outer iterations, barrier value 9.196992e-10
#> --------------------------------------------