
Method of Moments Parameter Estimation for the MBBEFD distribution

Description

Attempts to find the best g and b parameters consistent with the first and second moments of the supplied data.

Usage

mommb(x, m = FALSE, tol = NULL, na.rm = TRUE, opts = list())

Arguments

x

numeric; If m is FALSE, a vector of observations between 0 and 1. If m is TRUE, then a vector of length 2, where the first element is the first central moment (mean) of the MBBEFD distribution and the second element is the second central moment (variance) of the MBBEFD distribution.

m

logical; when FALSE (the default), x is treated as a vector of observations. When TRUE, x is treated as the pair of the distribution's first two central moments, E[X] and Var[X].

tol

numeric; tolerance of the expectation-maximization algorithm. If set too tight, the algorithm may fail. Defaults to the square root of .Machine$double.eps, roughly 1.49\times 10^{-8}; see Details.

na.rm

logical; if TRUE (default) NAs are removed. If FALSE, and there are NAs, the algorithm will stop with an error.

opts

list; Configuration options including:

  • alg: character; either "EM", the expectation-maximization algorithm of the package author, or "LS", a form of the grid-search algorithm of Bernegger (2026), implemented as a nested set of 1D line-search optimizations.

  • maxit: integer; maximum number of iterations for the EM algorithm. Ignored for the LS algorithm.

  • maxb: numeric; the upper bound of the b parameter for fitting purposes. Used in both algorithms. Defaults to 1e6.

  • minb: numeric; the lower bound of the b parameter for fitting purposes. Only used in LS algorithm. Must be positive and less than maxb. Defaults to 1e-10.

  • maxg: numeric; the upper bound of the g parameter for fitting purposes. Only used in LS algorithm. Must be positive. Defaults to 1e6.

  • ming: numeric; the lower bound of the g parameter for fitting purposes. Only used in LS algorithm. Must be strictly greater than 1 and less than maxg. Defaults to 1 + 1e-10.

  • trace: logical; if TRUE, the EM algorithm will print the values of g and b at each iteration i. Ignored with a message for the LS algorithm. The default is FALSE.
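Putting the options above together, a fully specified opts list might look like the following. This is only an illustration: the maxit value shown here is hypothetical, as its default is not documented above; the remaining values are the documented defaults.

```r
opts <- list(
  alg   = "LS",        # or "EM", the default
  maxit = 500L,        # EM only; illustrative value, default not documented
  maxb  = 1e6,         # default
  minb  = 1e-10,       # default; LS only
  maxg  = 1e6,         # default; LS only
  ming  = 1 + 1e-10,   # default; LS only
  trace = FALSE        # default
)
```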

Details

There are two fitting algorithms.

Expectation-Maximization

The default is an expectation-maximization form based on sections 4.1 and 4.2 of Bernegger (1997). With rare exceptions, the fitted g and b parameters must conform to:

\mu = \frac{\ln(gb)(1-b)}{\ln(b)(1-gb)}

subject to:

\mu^2 \le E[x^2] \le \mu, \qquad p \le E[x^2]

where \mu is the “true” first raw moment (so \mu^2 is its square), E[x^2] is the empirical second raw moment, and p is the mass point probability of a maximal loss: 1 - F(1^{-}).
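As a quick numerical check of the mean equation and the moment constraints above (mb_mean is an ad hoc helper for this illustration, not a package export, and the beta sample is just an arbitrary stand-in for observations on [0, 1]):

```r
# Theoretical MBBEFD mean from the equation above
mb_mean <- function(g, b) log(g * b) * (1 - b) / (log(b) * (1 - g * b))
mu <- mb_mean(25, 4)    # about 0.1007

# Any sample on [0, 1] satisfies mean(x)^2 <= mean(x^2) <= mean(x):
# the left inequality because variances are nonnegative, the right
# because x^2 <= x when 0 <= x <= 1.
set.seed(42L)
x <- rbeta(1000L, 1, 9)
stopifnot(mean(x) ^ 2 <= mean(x ^ 2), mean(x ^ 2) <= mean(x))
```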

The algorithm starts with the estimate p = E[x^2] as an upper bound. However, in step 2 of section 4.2, the p component is estimated as the difference between the empirical second moment and the numerical integration of x^2 f(x), that is, p = E[x^2] - \int x^2 f(x)\,dx, as seen in equation (4.3). This is converted to g by reciprocation, and convergence is tested by the difference between this new g and its prior value. If the new p \le 0, the algorithm stops with an error.
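The mass-point update of step 2 can be illustrated numerically. This is a sketch only, not the package internals: the continuous-part density below is derived from the MBBEFD survival function 1 - F(x) = (1 - b) b^x / ((g - 1) b + (1 - g b) b^x) of Bernegger (1997), and the names mb_dens, p_hat, and g_hat are ad hoc.

```r
g <- 25; b <- 4
mb_dens <- function(x)
  -(1 - b) * (g - 1) * b * b ^ x * log(b) /
    ((g - 1) * b + (1 - g * b) * b ^ x) ^ 2

# The continuous part carries mass 1 - p, where p = 1/g is the mass at x = 1
int_f  <- integrate(mb_dens, 0, 1)$value                      # about 0.96
int_x2 <- integrate(function(x) x ^ 2 * mb_dens(x), 0, 1)$value
m2     <- int_x2 + 1 / g   # theoretical second raw moment E[x^2]

# Equation (4.3)-style update: subtract the continuous-part integral from
# the second moment to recover the mass point, then reciprocate to get g
p_hat <- m2 - int_x2
g_hat <- 1 / p_hat         # recovers g = 25
```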

Line Search

Bernegger (2026), in Algorithm 3 (Appendix C), describes a grid-search algorithm for converging on g and b. The original algorithm searches for the parameter sets which return the appropriate mean and coefficient of variation (CV), the latter of which can be expressed in closed form using the dilogarithm function. However, instead of the 50,000-point grid suggested in the paper, this package implements the algorithm as a nested set of calls to the one-dimensional optimization routine optimize. This is significantly faster and has the benefit of returning a value even when no zeros can be found. The search ranges default to those suggested in the paper, but they may be overridden through the opts list. Occasionally the algorithm converges to the upper bound of g. In that case, the package implementation will try once more using the square root of maxg instead. This can allow convergence, often to the same point as the EM algorithm, assuming the latter converged. When this happens, the iter value will be 2 instead of 1.
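The nested line-search idea can be sketched as follows. This is a hypothetical simplification, not the package's code: the package matches the mean and CV via the dilogarithm, whereas this demo matches the first two raw moments numerically; all function names are ad hoc, and the search intervals are narrowed for the demonstration.

```r
# Theoretical MBBEFD mean (Bernegger 1997)
mb_mean <- function(g, b) log(g * b) * (1 - b) / (log(b) * (1 - g * b))

# Continuous-part density, derived from the survival function
# 1 - F(x) = (1 - b) b^x / ((g - 1) b + (1 - g b) b^x)
mb_dens <- function(x, g, b)
  -(1 - b) * (g - 1) * b * b ^ x * log(b) /
    ((g - 1) * b + (1 - g * b) * b ^ x) ^ 2

# Second raw moment: continuous part plus the mass p = 1/g at x = 1
mb_m2 <- function(g, b)
  integrate(function(x) x ^ 2 * mb_dens(x, g, b), 0, 1)$value + 1 / g

# Nested 1-D searches: for each candidate g, the inner optimize() finds the
# b matching the target mean; the outer optimize() then matches the target
# second moment.
ls_fit <- function(m1, m2, blim = c(1e-6, 1e3), glim = c(2, 200)) {
  inner_b <- function(g)
    optimize(function(b) (mb_mean(g, b) - m1) ^ 2, blim, tol = 1e-10)$minimum
  g <- optimize(function(g) (mb_m2(g, inner_b(g)) - m2) ^ 2,
                glim, tol = 1e-10)$minimum
  list(g = g, b = inner_b(g))
}

# Recover known parameters (g = 25, b = 4) from their own theoretical moments
fit <- ls_fit(mb_mean(25, 4), mb_m2(25, 4))
```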

Value

Returns a list containing:

g

The fitted g parameter.

b

The fitted b parameter.

iter

For the EM algorithm, the number of iterations used. For the LS algorithm, the number of attempts (1 or 2 if the retry was needed; see Details).

sqerr

The squared error between the empirical mean and the theoretical mean given the fitted g and b. This value is not meaningful for some of the special branch cases, such as b = 0 or g = 1.

Note

Anecdotal evidence indicates that parameter estimates from either fitting algorithm can be volatile when sample sizes are small (fewer than a few hundred observations).

Author(s)

Avraham Adler Avraham.Adler@gmail.com

References

Bernegger, Stefan. (1997) The Swiss Re Exposure Curves and the MBBEFD Distribution Class. ASTIN Bulletin 27(1), 99–111. \Sexpr[results=rd]{tools:::Rd_expr_doi("10.2143/AST.27.1.563208")}

Bernegger, Stefan. (2026) Properties of the MBBEFD Distribution Classes. https://www.researchgate.net/publication/400516019_Properties_of_the_MBBEF_D_Distribution_Classes

See Also

rmb for random variate generation.

Examples

set.seed(85L)
x <- rmb(1000, 25, 4)
mommb(x)
mommb(x, opts = list(alg = "LS"))

MBBEFDLite documentation built on March 10, 2026, 9:07 a.m.