Description

The function either searches for a root of the quasi-score or minimizes one of the criterion functions.
Usage

searchMinimizer(x0, qsd, method, opts, control, ..., obs, info,
     check, restart, pl, verbose)
Arguments

x0        (named) numeric vector, the starting point
qsd       object of class QLmodel
method    names of possible minimization routines (see Details)
opts      list of control arguments for the quasi-scoring iteration, see qscoring
control   list of control arguments passed to the auxiliary routines
...       further arguments passed to the underlying routines
obs       numeric vector of observed statistics; overwrites 'qsd$obs'
info      additional information at the found minimizer
check     logical; whether to check the input arguments
restart   logical; whether to restart the search in case of failure
pl        numeric value (>=0), the print level
verbose   logical; if TRUE, print intermediate output
Details

The function provides an interface to local and global numerical minimization routines, using the approximate quasi-deviance (QD) or Mahalanobis distance (MD) as an objective (monitor) function.

The function does not require additional simulations to find an approximate minimizer or root of the quasi-score. The numerical iterations always operate on the fast-to-evaluate approximations of the criterion function. The main purpose is to provide an entry point for minimization without the need to sample new candidate points for evaluation. This is particularly useful when searching for a "first-shot" minimizer, or when re-iterating a few further steps after estimation of the model parameter.
During minimization (or root finding), the criterion function is treated as a deterministic (non-random) function whose surface depends on the sample points and the chosen covariance models. Because of the typically nonconvex nature of the criterion functions, one cannot expect a global minimizer from any local search method such as, for example, the scoring iteration qscoring. Therefore, if the quasi-scoring iteration or some other available method gets stuck in a local minimum of the criterion function while showing at least some kind of numerical convergence, we use that minimizer as it is and finish the search, possibly being unlucky and not having found an approximate root of the quasi-score vector (or a minimum of the Mahalanobis distance). If there is no obvious convergence, or an error occurs, the search is restarted by switching to the next user-supplied minimization routine defined in the vector of method names 'method'.
Besides the quasi-scoring method ('method' equal to "qscoring"), the following (derivative-free) routines from the nloptr package are available for minimizing both criterion functions:

- bobyqa, cobyla and neldermead
- direct, for global search, with a locally biased version named directL
- lbfgs, for minimizing the MD with constant 'Sigma' only
- nloptr, as the general optimizer, which allows further methods to be used
Using quasi-scoring first, which is only valid for minimizing the QD, is always a good idea, since the starting point might already be a good guess close to an approximate root. If this fails, we switch to one of the alternative methods above (e.g. bobyqa as the default method) or eventually, in some really hard situations, to the method 'direct', if given, or its locally biased version 'directL'. The order of processing is determined by the order of appearance of the names in the argument 'method'. Any method available from the package nloptr can be chosen. In particular, setting method="nloptr" and defining 'control' appropriately allows choosing a multistart algorithm such as mlsl; see also multiSearch for an alternative solution.
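The fallback order and the multistart setup described above can be sketched as follows. This is only a sketch: 'qsd' is assumed to be a quasi-likelihood model object built beforehand, the parameter names are placeholders, and the nloptr control settings shown are assumptions following nloptr's conventions rather than defaults of this package.

```r
## Sketch only: 'qsd' is assumed to be a previously constructed
## quasi-likelihood model; parameter names below are placeholders.
x0 <- c("theta1" = 0.5, "theta2" = 1.0)

## Try quasi-scoring first; if it fails, restart with bobyqa and
## finally with the global method direct (the order of the names
## determines the order of restarts).
fit <- searchMinimizer(x0, qsd, method = c("qscoring", "bobyqa", "direct"))

## Hand over to a multistart algorithm (MLSL) by selecting method
## "nloptr" and choosing the algorithm via 'control'.
fit2 <- searchMinimizer(x0, qsd, method = "nloptr",
            control = list("algorithm" = "NLOPT_GN_MLSL_LDS",
                           "maxeval" = 1000))
```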
Only if there are reasonable arguments against quasi-scoring, such as expecting local minima rather than a root first, or a limited computational budget, should we apply the direct search method 'direct', which leads to a globally exhaustive search. Note that we must always supply a starting point 'x0', which could be any vector-valued parameter of the parameter space, unless the method 'direct' is chosen. In that case 'x0' is still required but ignored as a starting point, since the method internally uses the "center point" of the (hyper)box constraints. In addition, if cross-validation models 'cvm' are given, the cross-validation prediction variances are inherently used during consecutive iterations of all methods. This results in additional computational effort due to the repeated evaluations of the statistics required to calculate these variances during each new iteration.
Value

A list with the following elements:

par           solution vector
value         objective value
method        applied method
convergence   termination code
score         if applicable, the quasi-score vector (or the gradient of the MD)
Author(s)

M. Baaske
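Examples

A minimal usage sketch: 'qsd' is assumed to be a quasi-likelihood model object built beforehand, and the starting point and parameter names are placeholders.

```r
## Sketch only: 'qsd' denotes an assumed, previously built
## quasi-likelihood model; the starting point is a placeholder.
x0 <- c("a" = 0.1, "b" = 2.0)

## Search for a root of the quasi-score starting from 'x0',
## falling back to bobyqa, with a low print level for progress output.
res <- searchMinimizer(x0, qsd, method = c("qscoring", "bobyqa"), pl = 1)

res$par          # solution vector
res$value        # objective value
res$convergence  # termination code
```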