mommb                                                        R Documentation
Description:

Attempts to find the g and b parameters most consistent with the first
and second moments of the supplied data.
Usage:

mommb(x, m = FALSE, tol = NULL, na.rm = TRUE, opts = list())
Arguments:

x: numeric; If …
m: logical; When …
tol: numeric; tolerance of the expectation-maximization algorithm. If too tight, the algorithm may fail. Defaults to the square root of …
na.rm: logical; if …
opts: list; configuration options, including: …
Details:

There are two fitting algorithms. The default is an expectation-maximization
approach based on sections 4.1 and 4.2 of Bernegger (1997). With rare
exceptions, the fitted g and b parameters must satisfy:
\mu = \frac{\ln(gb)(1-b)}{\ln(b)(1-gb)}
subject to:
\mu^2 \le E[x^2] \le \mu, \qquad p \le E[x^2]
where \mu is the "true" mean (so \mu^2 is its square), E[x^2] is the
empirical second raw moment, and p is the mass point probability of a
maximal loss: p = 1 - F(1^{-}).
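As an illustrative sketch of these quantities (the helper names `mb_mean` and `check_moments` are hypothetical, not part of the package; `mb_mean` implements the theoretical mean formula above):

```r
## Hypothetical helpers sketching the quantities above; not package code.
## mb_mean implements the theoretical mean of an MBBEFD(g, b) loss ratio,
## valid for g > 1, b > 0, b != 1, and g * b != 1.
mb_mean <- function(g, b) {
  log(g * b) * (1 - b) / (log(b) * (1 - g * b))
}

## Empirical moment checks: for data on (0, 1], the square of the mean
## must not exceed the second raw moment, which must not exceed the mean.
check_moments <- function(x) {
  m1 <- mean(x)
  m2 <- mean(x ^ 2)
  c(lower = m1 ^ 2 <= m2, upper = m2 <= m1)
}
```

For instance, `mb_mean(25, 4)` returns roughly 0.1007, the mean implied by the parameters used in the Examples section.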
The algorithm starts with the estimate p = E[x^2] as an upper bound.
In step 2 of section 4.2, however, p is re-estimated as the difference
between the empirical second moment and the numerical integral of
x^2 f(x), that is p = E[x^2] - \int x^2 f(x)\,dx, as in equation (4.3).
This is converted to g by reciprocation (since p = 1/g), and convergence
is tested on the difference between this new g and its prior value. If the
new p \le 0, the algorithm stops with an error.
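The step-2 update for p (and hence g) can be sketched as below, with b held fixed. This is not the package's internal code: the helper names are hypothetical, and the density used is the continuous part of the standard MBBEFD density implied by F(x) = 1 - (1-b) b^x / ((g-1)b + (1-gb) b^x) on [0, 1), which places mass p = 1/g at a total loss.

```r
## Sketch of the step-2 p-update (hypothetical helpers, not package code).
## mb_density is the continuous part of the MBBEFD density on (0, 1);
## the distribution also places mass p = 1/g at x = 1.
mb_density <- function(x, g, b) {
  D <- (g - 1) * b + (1 - g * b) * b ^ x
  -(g - 1) * (1 - b) * b ^ (x + 1) * log(b) / D ^ 2
}

update_g <- function(x, g, b) {
  m2 <- mean(x ^ 2)  # empirical second raw moment
  cont <- integrate(function(t) t ^ 2 * mb_density(t, g, b), 0, 1)$value
  p <- m2 - cont     # mass point estimate, as in equation (4.3)
  if (p <= 0) stop("estimated mass point is non-positive")
  1 / p              # reciprocation: g = 1 / p
}
```

As a sanity check, the continuous part integrates to F(1^-) - F(0) = 1 - 1/g, so for (g, b) = (25, 4) it carries mass 0.96.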
In Algorithm 3 (Appendix C), Bernegger (2026) describes a grid-search
algorithm for converging on g and b. The original algorithm looks for the
sets of parameters which return the appropriate mean and coefficient of
variation (CV), the latter of which can be expressed in closed form using
the dilogarithm function. Instead of the 50,000-point grid suggested in the
paper, however, this package implements the algorithm as a nested set of
calls to optimize, R's one-dimensional optimization routine. This is
significantly faster and has the benefit of returning a value even when no
zeros can be found. The algorithm defaults to the ranges suggested in the
paper, but these may be overridden by the user through the opts list.
Occasionally the algorithm converges to the upper bound of g; in that case,
the package implementation retries once using the square root of maxg
instead. This often allows convergence, frequently to the same point as the
EM algorithm when the latter converged. When the retry occurs, the returned
iter value will be 2 instead of 1.
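The nested-optimize idea, including the retry at sqrt of the upper bound, can be sketched as below. This toy version matches the raw first and second moments rather than the mean and CV, searches only b > 1, and its search ranges and helper names are assumptions, not the package's actual defaults:

```r
## Toy nested one-dimensional search in the spirit of the LS algorithm.
## Assumed MBBEFD mean/density formulas; hypothetical helper names.
mb_mean <- function(g, b) log(g * b) * (1 - b) / (log(b) * (1 - g * b))
mb_dens <- function(x, g, b) {
  D <- (g - 1) * b + (1 - g * b) * b ^ x
  -(g - 1) * (1 - b) * b ^ (x + 1) * log(b) / D ^ 2
}
## Theoretical second raw moment: continuous part plus mass 1/g at x = 1.
mb_m2 <- function(g, b)
  integrate(function(t) t ^ 2 * mb_dens(t, g, b), 0, 1)$value + 1 / g

fit_nested <- function(x, gmax = 1e3, bmax = 1e3) {
  m1 <- mean(x)
  m2 <- mean(x ^ 2)
  ## inner search: for a candidate g, pick the b matching the sample mean
  bstar <- function(g)
    optimize(function(b) (mb_mean(g, b) - m1) ^ 2, c(1 + 1e-6, bmax))$minimum
  ## outer search: pick the g matching the sample second raw moment
  solve_g <- function(gm)
    optimize(function(g) (mb_m2(g, bstar(g)) - m2) ^ 2, c(1 + 1e-6, gm))$minimum
  g <- solve_g(gmax)
  iter <- 1L
  if (g > 0.99 * gmax) {  # hit the upper bound: retry once with sqrt(gmax)
    g <- solve_g(sqrt(gmax))
    iter <- 2L
  }
  list(g = g, b = bstar(g), iter = iter)
}
```

Because each level is a one-dimensional Brent search, the sketch always returns a parameter pair, even when no exact zero of the moment equations exists in the searched range.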
Value:

Returns a list containing:

g: The fitted g parameter.
b: The fitted b parameter.
iter: For the EM algorithm, the number of iterations used. For the LS algorithm, the number of attempts (1, or 2 if the retry was needed; see Details).
sqerr: The squared error between the empirical mean and the theoretical mean given the fitted parameters.
Note:

Anecdotal evidence indicates that parameter estimates from either fitting
algorithm can be volatile at small sample sizes (fewer than a few hundred
observations).
Author(s):

Avraham Adler <Avraham.Adler@gmail.com>
References:

Bernegger, Stefan (1997) "The Swiss Re Exposure Curves and the MBBEFD
Distribution Class." ASTIN Bulletin 27(1), 99-111.
doi:10.2143/AST.27.1.563208

Bernegger, Stefan (2026) "Properties of the MBBEFD Distribution Classes."
https://www.researchgate.net/publication/400516019_Properties_of_the_MBBEF_D_Distribution_Classes
See Also:

rmb for random variate generation.
Examples:

set.seed(85L)
x <- rmb(1000, 25, 4)
mommb(x)
mommb(x, opts = list(alg = "LS"))