mle.tools: Overview of the "mle.tools" Package


Description

The current version of the mle.tools package implements three functions of great interest in maximum likelihood estimation. These functions calculate the expected/observed Fisher information and the bias-corrected maximum likelihood estimate(s) using the bias formula introduced by Cox and Snell (1968). They can be applied to any probability density function whose terms are available in the derivatives table of the D function (see the "deriv.c" source code for further details). Integrals, when required, are computed numerically via the integrate function. Below are some mathematical details of how the returned values are calculated.
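To make the last point concrete, the sketch below differentiates a log-density with D; the Weibull form and the parameter names shape and scale are only an illustration chosen for this example, not objects defined by the package.

## A log-density built from terms in D()'s derivative table; the Weibull
## form and the parameter names are illustrative choices.
lpdf <- quote(log(shape) - shape * log(scale) + (shape - 1) * log(x) -
              (x / scale)^shape)

D(lpdf, "scale")                # first partial derivative w.r.t. scale
D(D(lpdf, "scale"), "shape")    # mixed second partial derivative

A density written with terms outside that derivative table would make D, and hence the package functions, fail, which is why the description above restricts the class of densities that can be handled.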

Let X_{1}, \ldots, X_{n} be i.i.d. random variables with probability density function f(x_{i} \mid \boldsymbol{\theta}) depending on a p-dimensional parameter vector \boldsymbol{\theta} = (\theta_{1}, \ldots, \theta_{p}). The (j,k)-th element of the observed, H_{jk}, and expected, I_{jk}, Fisher information are calculated, respectively, as

H_{jk} = \left. -\sum_{i=1}^{n} \frac{\partial^{2}}{\partial\theta_{j}\,\partial\theta_{k}} \log f\left(x_{i} \mid \boldsymbol{\theta}\right) \right\vert_{\boldsymbol{\theta} = \widehat{\boldsymbol{\theta}}}

and

I_{jk} = -n \times E\left( \frac{\partial^{2}}{\partial\theta_{j}\,\partial\theta_{k}} \log f\left(x \mid \boldsymbol{\theta}\right) \right) = \left. -n \times \int_{\mathcal{X}} \frac{\partial^{2}}{\partial\theta_{j}\,\partial\theta_{k}} \log f\left(x \mid \boldsymbol{\theta}\right) \times f\left(x \mid \boldsymbol{\theta}\right)\, dx \right\vert_{\boldsymbol{\theta} = \widehat{\boldsymbol{\theta}}}

where j, k = 1, \ldots, p, \widehat{\boldsymbol{\theta}} is the maximum likelihood estimate of \boldsymbol{\theta} and \mathcal{X} denotes the support of the random variable X.
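As a numerical check of these two definitions, the sketch below evaluates H and I for a one-parameter exponential model with mean mu; the simulated data, the parameterization and the helper code are assumptions made for this illustration and are not part of the package.

## Illustrative sample and model: exponential with mean mu,
## log f(x | mu) = -log(mu) - x / mu.
set.seed(1)
x <- rexp(50, rate = 2)
mu.hat <- mean(x)                          # MLE of mu

lpdf <- quote(-log(mu) - x / mu)
d2 <- D(D(lpdf, "mu"), "mu")               # 1/mu^2 - 2 * x/mu^3

## Observed information: minus the sum of second derivatives at the MLE
H <- -sum(eval(d2, list(mu = mu.hat, x = x)))

## Expected information: -n * E[second derivative], via numerical integration
I <- -length(x) * integrate(function(x)
        eval(d2, list(mu = mu.hat, x = x)) * dexp(x, rate = 1 / mu.hat),
        lower = 0, upper = Inf)$value

c(H = H, I = I)                            # both equal n / mu.hat^2 here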

The observed.varcov function returns the supplied maximum likelihood estimate(s) and the inverse of \boldsymbol{H}, while the expected.varcov function returns the supplied maximum likelihood estimate(s) and the inverse of \boldsymbol{I}. If \boldsymbol{H} and/or \boldsymbol{I} is singular, an error message is returned.
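A usage sketch for the two functions follows. The Weibull data are simulated, the MLEs come from MASS::fitdistr, and in particular the argument names (logdensity, X, density, n, parms, mle, lower, upper) reflect one reading of the package's help pages; they should be checked against ?observed.varcov and ?expected.varcov before relying on this sketch.

library(mle.tools)

## Illustrative Weibull sample; MASS::fitdistr supplies the MLEs
set.seed(123)
x <- rweibull(100, shape = 1.5, scale = 2.0)
mle <- MASS::fitdistr(x, densfun = "weibull")$estimate

pdf  <- quote(shape / scale * (x / scale)^(shape - 1) * exp(-(x / scale)^shape))
lpdf <- quote(log(shape) - shape * log(scale) + (shape - 1) * log(x) -
              (x / scale)^shape)

## Inverse of H evaluated at the supplied MLEs
observed.varcov(logdensity = lpdf, X = x, parms = c("shape", "scale"), mle = mle)

## Inverse of I, with the expectations integrated over (0, Inf)
expected.varcov(density = pdf, logdensity = lpdf, n = length(x),
                parms = c("shape", "scale"), mle = mle, lower = 0, upper = Inf)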

Furthermore, the bias-corrected maximum likelihood estimate of \theta_{s} (s = 1, \ldots, p), denoted by \widetilde{\theta}_{s}, is calculated as \widetilde{\theta}_{s} = \widehat{\theta}_{s} - \widehat{Bias}(\widehat{\theta}_{s}), where \widehat{\theta}_{s} is the maximum likelihood estimate of \theta_{s} and

\widehat{Bias}\left(\widehat{\theta}_{s}\right) = \left. \sum_{j=1}^{p}\sum_{k=1}^{p}\sum_{l=1}^{p} \kappa^{sj}\,\kappa^{kl}\left[ 0.5\,\kappa_{jkl} + \kappa_{jk,l} \right] \right\vert_{\boldsymbol{\theta} = \widehat{\boldsymbol{\theta}}}

where \kappa^{jk} is the (j,k)-th element of the inverse of the expected Fisher information, \kappa_{jkl} = n \times E\left( \frac{\partial^{3}}{\partial\theta_{j}\,\partial\theta_{k}\,\partial\theta_{l}} \log f\left(x \mid \boldsymbol{\theta}\right) \right) and \kappa_{jk,l} = n \times E\left( \frac{\partial^{2}}{\partial\theta_{j}\,\partial\theta_{k}} \log f\left(x \mid \boldsymbol{\theta}\right) \times \frac{\partial}{\partial\theta_{l}} \log f\left(x \mid \boldsymbol{\theta}\right) \right).
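As a check of the formula, the one-parameter sketch below reproduces the well-known first-order bias rate/n of the exponential rate MLE by computing \kappa^{11}, \kappa_{111} and \kappa_{11,1} with D and integrate; the model, the fitted value and the helper function Ef are assumptions made for this illustration, not package code.

## Cox-Snell bias for the exponential rate MLE, computed from the cumulants
rate.hat <- 2; n <- 50
lpdf <- quote(log(rate) - rate * x)
d1 <- D(lpdf, "rate"); d2 <- D(d1, "rate"); d3 <- D(d2, "rate")

Ef <- function(e) {                        # E[e(X)] under the fitted model
  integrate(function(x) eval(e, list(rate = rate.hat, x = x)) *
              dexp(x, rate = rate.hat), lower = 0, upper = Inf)$value
}

k11   <- 1 / (-n * Ef(d2))                 # kappa^{11}: inverse expected information
k111  <- n * Ef(d3)                        # kappa_{111}
k11.1 <- n * Ef(bquote(.(d2) * .(d1)))     # kappa_{11,1}

bias <- k11^2 * (0.5 * k111 + k11.1)       # Cox-Snell bias of the rate MLE
c(cox.snell = bias, first.order = rate.hat / n)   # both equal 0.04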

The bias-corrected maximum likelihood estimate(s) and some other quantities are calculated via the coxsnell.bc function. If the numerical integration fails and/or \boldsymbol{I} is singular, an error message is returned.
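Continuing the Weibull sketch above, a call to coxsnell.bc might look as follows; the argument names are assumed to mirror those of expected.varcov and should be verified against ?coxsnell.bc.

## Bias-corrected MLEs for the illustrative Weibull fit
coxsnell.bc(density = pdf, logdensity = lpdf, n = length(x),
            parms = c("shape", "scale"), mle = mle, lower = 0, upper = Inf)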

It is noteworthy that, for a number of probability distributions, it is possible, after extensive algebra, to obtain analytical expressions for Bias(\widehat{\theta}_{s}). Stosic and Cordeiro (2009) present the analytic expressions for 22 two-parameter continuous probability distributions, along with the Maple and Mathematica scripts used to obtain them (see Cordeiro and Cribari-Neto, 2014 for further details).

Author(s)

Josmar Mazucheli jmazucheli@gmail.com

References

Azzalini, A. (1996). Statistical Inference: Based on the Likelihood. London: Chapman and Hall.

Cordeiro, G. M. and Cribari-Neto, F., (2014). An introduction to Bartlett correction and bias reduction. SpringerBriefs in Statistics, New York.

Cordeiro, G. M. and McCullagh, P., (1991). Bias correction in generalized linear models. Journal of the Royal Statistical Society, Series B, 53, 3, 629–643.

Cox, D. R. and Hinkley, D. V. (1974). Theoretical Statistics. London: Chapman and Hall.

Cox, D. R. and Snell, E. J., (1968). A general definition of residuals (with discussion). Journal of the Royal Statistical Society, Series B, 30, 2, 248–275.

Efron, B. and Hinkley, D. V. (1978). Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. Biometrika, 65, 3, 457–482.

Pawitan, Y. (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford: Oxford University Press.

Stosic, B. D. and Cordeiro, G. M., (2009). Using Maple and Mathematica to derive bias corrections for two parameter distributions. Journal of Statistical Computation and Simulation, 79, 6, 751–767.
