# elnorm3: Estimate Parameters of a Three-Parameter Lognormal Distribution

### Description

Estimate the mean, standard deviation, and threshold parameters for a three-parameter lognormal distribution, and optionally construct a confidence interval for the threshold or the median of the distribution.

### Usage

```r
elnorm3(x, method = "lmle", ci = FALSE, ci.parameter = "threshold",
  ci.method = "avar", ci.type = "two-sided", conf.level = 0.95,
  threshold.lb.sd = 100)
```

### Arguments

- `x`: numeric vector of observations.

- `method`: character string specifying the method of estimation. Possible values are `"lmle"` (local maximum likelihood; the default), `"mme"` (method of moments), `"mmue"` (method of moments using an unbiased estimate of variance), `"mmme"` (modified method of moments due to Cohen and Whitten (1980)), `"zero.skew"` (zero-skewness estimator due to Griffiths (1980)), and `"royston.skew"` (estimator based on Royston's (1992b) index of skewness). See the DETAILS section for more information on these estimation methods.

- `ci`: logical scalar indicating whether to compute a confidence interval for either the threshold or median of the distribution. The default value is `FALSE`.

- `ci.parameter`: character string indicating the parameter for which the confidence interval is desired. The possible values are `"threshold"` (the default) and `"median"`. This argument is ignored if `ci=FALSE`.

- `ci.method`: character string indicating the method to use to construct the confidence interval. The possible values are `"avar"` (asymptotic variance; the default), `"likelihood.profile"`, and `"skewness"` (method suggested by Royston (1992b) for `method="zero.skew"`). This argument is ignored if `ci=FALSE`.

- `ci.type`: character string indicating what kind of confidence interval to compute. The possible values are `"two-sided"` (the default), `"lower"`, and `"upper"`. This argument is ignored if `ci=FALSE`.

- `conf.level`: a scalar between 0 and 1 indicating the confidence level of the confidence interval. The default value is `conf.level=0.95`. This argument is ignored if `ci=FALSE`.

- `threshold.lb.sd`: a positive numeric scalar specifying the range over which to look for the local maximum likelihood (`method="lmle"`) or zero-skewness (`method="zero.skew"`) estimator of threshold. The range is set to [`mean(x) - threshold.lb.sd * sd(x)`, `min(x)`]. If you receive a warning message that `elnorm3` is unable to find an acceptable estimate of threshold in this range, it may be because of convergence problems specific to the data in `x`. When this occurs, try changing the value of `threshold.lb.sd`. This same range is used in constructing confidence intervals for the threshold parameter. The default value is `threshold.lb.sd=100`. This argument is relevant only if `method="lmle"`, `method="zero.skew"`, `ci.method="likelihood.profile"`, and/or `ci.method="skewness"`.

### Details

If x contains any missing (NA), undefined (NaN) or infinite (Inf, -Inf) values, they will be removed prior to performing the estimation.

Let X denote a random variable from a three-parameter lognormal distribution with parameters meanlog=μ, sdlog=σ, and threshold=γ. Let \underline{x} denote a vector of n observations from this distribution. Furthermore, let x_{(i)} denote the i'th order statistic in the sample, so that x_{(1)} denotes the smallest value and x_{(n)} denotes the largest value in \underline{x}. Finally, denote the sample mean and variance by:

\bar{x} = \frac{1}{n} ∑_{i=1}^n x_i \;\;\;\; (1)

s^2 = \frac{1}{n-1} ∑_{i=1}^n (x_i - \bar{x})^2 \;\;\;\; (2)

Note that the sample variance is the unbiased version. Denote the method of moments estimator of variance by:

s^2_m = \frac{1}{n} ∑_{i=1}^n (x_i - \bar{x})^2 \;\;\;\; (3)

Estimation

Local Maximum Likelihood Estimation (method="lmle")
Hill (1963) showed that the likelihood function approaches infinity as γ approaches x_{(1)}, so that the global maximum likelihood estimators of (μ, σ, γ) are (-∞, ∞, x_{(1)}), which are inadmissible, since γ must be smaller than x_{(1)}. Cohen (1951) suggested using local maximum likelihood estimators (lmle's), derived by equating partial derivatives of the log-likelihood function to zero. These estimators were studied by Harter and Moore (1966), Calitz (1973), Cohen and Whitten (1980), and Griffiths (1980), and appear to possess most of the desirable properties ordinarily associated with maximum likelihood estimators.

Cohen (1951) showed that the lmle of γ is given by the solution to the following equation:

[∑_{i=1}^n \frac{1}{w_i}] \, \{∑_{i=1}^n y_i - ∑_{i=1}^n y_i^2 + \frac{1}{n}[∑_{i=1}^n y_i]^2 \} - n ∑_{i=1}^n \frac{y_i}{w_i} = 0 \;\;\;\; (4)

where

w_i = x_i - \hat{γ} \;\;\;\; (5)

y_i = log(x_i - \hat{γ}) = log(w_i) \;\;\;\; (6)

and that the lmle's of μ and σ then follow as:

\hat{μ} = \frac{1}{n} ∑_{i=1}^n y_i = \bar{y} \;\;\;\; (7)

\hat{σ}^2 = \frac{1}{n} ∑_{i=1}^n (y_i - \bar{y})^2 \;\;\;\; (8)

Unfortunately, while equation (4) simplifies the task of computing the lmle's, for certain data sets there still may be convergence problems (Calitz, 1973), and occasionally multiple roots of equation (4) may exist. When multiple roots to equation (4) exist, Cohen and Whitten (1980) recommend using the one that results in closest agreement between the lmle of μ (equation (7)) and the sample mean (equation (1)).

On the other hand, Griffiths (1980) showed that for a given value of the threshold parameter γ, the maximized value of the log-likelihood (the “profile likelihood” for γ) is given by:

log[L(γ)] = \frac{-n}{2} [1 + log(2π) + 2\hat{μ} + log(\hat{σ}^2) ] \;\;\;\; (9)

where the estimates of μ and σ are defined in equations (7) and (8), so the lmle of γ reduces to an iterative search over the values of γ. Griffiths (1980) noted that the distribution of the lmle of γ is far from normal and that log[L(γ)] is not quadratic near the lmle of γ. He suggested a better parameterization based on

η = -log(x_{(1)} - γ) \;\;\;\; (10)

Thus, once the lmle of η is found using equations (9) and (10), the lmle of γ is given by:

\hat{γ} = x_{(1)} - exp(-\hat{η}) \;\;\;\; (11)

When method="lmle", the function elnorm3 uses the function nlminb to search for the minimum of -2log[L(η)], using the modified method of moments estimator (method="mmme"; see below) as the starting value for γ. Equation (11) is then used to solve for the lmle of γ, and equation (4) is used to “fine tune” the estimated value of γ. The lmle's of μ and σ are then computed using equations (6)-(8).
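The profile-likelihood search in equations (9)-(11) can be sketched in a few lines of R. This is an illustrative simplification, not the EnvStats implementation: it uses an ad hoc fixed search interval for η instead of the mmme starting value, and it omits the fine-tuning step via equation (4); the function name is made up.

```r
# Profile-likelihood search for the lmle of the threshold, equations (9)-(11).
# Simplified sketch: fixed search interval, no mmme start, no fine-tuning.
neg2.profile.loglik <- function(eta, x) {
  gamma <- min(x) - exp(-eta)          # invert equation (10)
  y     <- log(x - gamma)              # equation (6); argument positive by construction
  mu    <- mean(y)                     # equation (7)
  s2    <- mean((y - mu)^2)            # equation (8)
  length(x) * (1 + log(2 * pi) + 2 * mu + log(s2))  # -2 * equation (9)
}

set.seed(47)
x <- 10 + exp(rnorm(50, mean = 1.5, sd = 1))   # sample with threshold = 10
opt <- optimize(neg2.profile.loglik, interval = c(-5, 5), x = x)
gamma.hat <- min(x) - exp(-opt$minimum)        # equation (11)
```

Parameterizing the search in η keeps the argument of the logarithm in equation (6) positive for every trial value, which is why Griffiths' reparameterization is convenient for an unconstrained optimizer.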

Method of Moments Estimation (method="mme")
Denote the r'th sample central moment by:

m_r = \frac{1}{n} ∑_{i=1}^n (x_i - \bar{x})^r \;\;\;\; (12)

and note that

s^2_m = m_2 \;\;\;\; (13)

Equating the sample first moment (the sample mean) with its population value (the population mean), and equating the second and third sample central moments with their population values yields (Johnson et al., 1994, p.228):

\bar{x} = γ + β √{ω} \;\;\;\; (14)

m_2 = s^2_m = β^2 ω (ω - 1) \;\;\;\; (15)

m_3 = β^3 ω^{3/2} (ω - 1)^2 (ω + 2) \;\;\;\; (16)

where

β = exp(μ) \;\;\;\; (17)

ω = exp(σ^2) \;\;\;\; (18)

Combining equations (15) and (16) yields:

b_1 = \frac{m_3}{m_2^{3/2}} = (ω + 2) √{ω - 1} \;\;\;\; (19)

The quantity on the left-hand side of equation (19) is the usual estimator of skewness. Solving equation (19) for ω yields:

\hat{ω} = (d + h)^{1/3} + (d - h)^{1/3} - 1 \;\;\;\; (20)

where

d = 1 + \frac{b_1^2}{2} \;\;\;\; (21)

h = √{d^2 - 1} \;\;\;\; (22)

Using equation (18), the method of moments estimator of σ is then computed as:

\hat{σ}^2 = log(\hat{ω}) \;\;\;\; (23)

Combining equations (15) and (17), the method of moments estimator of μ is computed as:

\hat{μ} = \frac{1}{2} log[\frac{s^2_m}{\hat{ω}(\hat{ω} - 1)}] \;\;\;\; (24)

Finally, using equations (14), (17), and (18), the method of moments estimator of γ is computed as:

\hat{γ} = \bar{x} - exp(\hat{μ} + \frac{\hat{σ}^2}{2}) \;\;\;\; (25)

There are two major problems with using method of moments estimators for the three-parameter lognormal distribution. First, they are subject to very large sampling error due to the use of second and third sample moments (Cohen, 1988, p.121; Johnson et al., 1994, p.228). Second, Heyde (1963) showed that the lognormal distribution is not uniquely determined by its moments.
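Equations (19)-(25) translate almost directly into R. The sketch below (the function name is invented, and it is not the packaged implementation) assumes the sample skewness is positive, so that equation (20) yields ω > 1:

```r
# Method of moments estimation via equations (19)-(25) (sketch).
# Assumes positive sample skewness b1, so omega > 1 and log(omega) > 0.
mom.lnorm3 <- function(x) {
  m2 <- mean((x - mean(x))^2)                  # equation (3)
  m3 <- mean((x - mean(x))^3)                  # equation (12) with r = 3
  b1 <- m3 / m2^(3/2)                          # skewness, equation (19)
  d  <- 1 + b1^2 / 2                           # equation (21): note the square of b1,
                                               # needed so (20) solves (19)
  h  <- sqrt(d^2 - 1)                          # equation (22)
  omega  <- (d + h)^(1/3) + (d - h)^(1/3) - 1  # equation (20)
  sigma2 <- log(omega)                         # equation (23)
  mu     <- 0.5 * log(m2 / (omega * (omega - 1)))  # equation (24)
  gamma  <- mean(x) - exp(mu + sigma2 / 2)     # equation (25)
  c(meanlog = mu, sdlog = sqrt(sigma2), threshold = gamma)
}

set.seed(123)
x <- 10 + exp(rnorm(500, mean = 1.5, sd = 0.5))
est <- mom.lnorm3(x)
```

Note that nothing in these equations forces the estimated threshold below x_{(1)}, which is one symptom of the large sampling error discussed above.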

Method of Moments Estimators Using an Unbiased Estimate of Variance (method="mmue")
This method of estimation is exactly the same as the method of moments (method="mme"), except that the unbiased estimator of variance (equation (2)) is used in place of the method of moments estimator (equation (3)). This modification is given in Cohen (1988, pp.119-120).

Modified Method of Moments Estimation (method="mmme")
This method of estimation is described by Cohen (1988, pp.125-132). It was introduced by Cohen and Whitten (1980; their MME-II with r=1) and was further investigated by Cohen et al. (1985). It is motivated by the fact that the first order statistic in the sample, x_{(1)}, contains more information about the threshold parameter γ than any other observation and often more information than all of the other observations combined (Cohen, 1988, p.125).

The first two sets of equations are the same as for the method of moments estimators using an unbiased estimate of variance (method="mmue"), i.e., equations (14) and (15) with the unbiased estimator of variance (equation (2)) used in place of the method of moments estimator (equation (3)). The third equation replaces equation (16) by equating a function of the first order statistic with its expected value:

log(x_{(1)} - γ) = μ + σ E[Z_{(1,n)}] \;\;\;\; (26)

where E[Z_{(i,n)}] denotes the expected value of the i'th order statistic in a random sample of n observations from a standard normal distribution. (See the help file for evNormOrdStats for information on how E[Z_{(i,n)}] is computed.) Using equations (17) and (18), equation (26) can be rewritten as:

x_{(1)} = γ + β exp\{√{log(ω)} \, E[Z_{(1,n)}] \} \;\;\;\; (27)

Combining equations (14), (15), (17), (18), and (27) yields the following equation for the estimate of ω:

\frac{s^2}{[\bar{x} - x_{(1)}]^2} = \frac{\hat{ω}(\hat{ω} - 1)}{[√{\hat{ω}} - exp\{√{log(\hat{ω})} \, E[Z_{(1,n)}] \} ]^2} \;\;\;\; (28)

After equation (28) is solved for \hat{ω}, the estimate of σ is again computed using equation (23), and the estimate of μ is computed using equation (24), where the unbiased estimate of variance is used in place of the biased one (just as for method="mmue").
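The mmme computation can be sketched with `uniroot()` for equation (28), using numerical integration for E[Z_{(1,n)}]. The function names below are invented (EnvStats computes the order-statistic expectation via `evNormOrdStats`), and the root-search interval is an ad hoc assumption:

```r
# Expected value of the smallest of n standard normal order statistics,
# E[Z(1,n)] = n * integral of z * dnorm(z) * (1 - pnorm(z))^(n-1).
ez1n <- function(n) {
  integrate(function(z) n * z * dnorm(z) * (1 - pnorm(z))^(n - 1),
            -Inf, Inf)$value
}

mmme.lnorm3 <- function(x) {
  n   <- length(x)
  s2  <- var(x)                              # unbiased variance, equation (2)
  e1  <- ez1n(n)                             # E[Z(1,n)], a negative number
  lhs <- s2 / (mean(x) - min(x))^2           # left side of equation (28)
  f <- function(w)                           # equation (28) as a root problem in omega
    w * (w - 1) / (sqrt(w) - exp(sqrt(log(w)) * e1))^2 - lhs
  omega  <- uniroot(f, c(1 + 1e-8, 100), extendInt = "upX")$root
  sigma2 <- log(omega)                       # equation (23)
  mu     <- 0.5 * log(s2 / (omega * (omega - 1)))  # equation (24), unbiased s^2
  gamma  <- mean(x) - exp(mu + sigma2 / 2)   # equation (14) rearranged
  c(meanlog = mu, sdlog = sqrt(sigma2), threshold = gamma)
}

set.seed(123)
x <- 10 + exp(rnorm(100, mean = 1.5, sd = 0.5))
est <- mmme.lnorm3(x)
```

At an exact root of equation (28), equation (27) forces the estimated threshold below x_{(1)}, which is one attraction of this method.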

Zero-Skewness Estimation (method="zero.skew")
This method of estimation was introduced by Griffiths (1980), and elaborated upon by Royston (1992b). The idea is that if the threshold parameter γ were known, then the distribution of:

Y = log(X - γ) \;\;\;\; (29)

is normal, so the skew of Y is 0. Thus, the threshold parameter γ is estimated as that value that forces the sample skew (defined in equation (19)) of the observations defined in equation (6) to be 0. That is, the zero-skewness estimator of γ is the value that satisfies the following equation:

0 = \frac{\frac{1}{n} ∑_{i=1}^n (y_i - \bar{y})^3}{[\frac{1}{n} ∑_{i=1}^n (y_i - \bar{y})^2]^{3/2}} \;\;\;\; (30)

where

y_i = log(x_i - \hat{γ}) \;\;\;\; (31)

Note that since the denominator in equation (30) is always positive (assuming there are at least two unique values in \underline{x}), only the numerator needs to be used to determine the value of \hat{γ}.

Once the value of \hat{γ} has been determined, μ and σ are estimated using equations (7) and (8), except the unbiased estimator of variance is used in equation (8).
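Since only the numerator of equation (30) matters, the zero-skewness estimator is a one-dimensional root-finding problem. A sketch (not the packaged implementation; helper names are invented, and the bracketing interval follows the default `threshold.lb.sd` range):

```r
# Numerator of equation (30) as a function of the trial threshold.
skew.num <- function(gamma, x) {
  y <- log(x - gamma)                  # equation (31)
  sum((y - mean(y))^3)
}

zero.skew.lnorm3 <- function(x) {
  lo <- mean(x) - 100 * sd(x)          # lower end of the default search range
  hi <- min(x) - sqrt(.Machine$double.eps) * abs(min(x))  # just below x(1)
  gamma <- uniroot(skew.num, c(lo, hi), x = x)$root
  y <- log(x - gamma)
  # equations (7)-(8), with the unbiased variance estimator for sdlog
  c(meanlog = mean(y), sdlog = sd(y), threshold = gamma)
}

set.seed(123)
x <- 10 + exp(rnorm(100, mean = 1.5, sd = 1))
est <- zero.skew.lnorm3(x)
```

The bracket works because the skew of y is positive for γ far below x_{(1)} (y is then nearly a linear function of x) and tends to -∞ as γ approaches x_{(1)}.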

Royston (1992b) developed a modification of the Shapiro-Wilk goodness-of-fit test for normality based on transforming the data using equation (6) and the zero-skewness estimator of γ (see gofTest).

Estimators Based on Royston's Index of Skewness (method="royston.skew")
This method of estimation is discussed by Royston (1992b), and is similar to the zero-skewness method discussed above, except a different measure of skewness is used. Royston's (1992b) index of skewness is given by:

q = \frac{y_{(n)} - \tilde{y}}{\tilde{y} - y_{(1)}} \;\;\;\; (32)

where y_{(i)} denotes the i'th order statistic of y, y is defined in equation (31) above, and \tilde{y} denotes the median of y. Royston (1992b) shows that the value of γ that yields a value of q=1 (i.e., zero skewness by this index) is given by:

\hat{γ} = \frac{x_{(1)}x_{(n)} - \tilde{x}^2}{x_{(1)} + x_{(n)} - 2\tilde{x}} \;\;\;\; (33)

where \tilde{x} denotes the sample median of x.

Again, as for the zero-skewness method, once the value of \hat{γ} has been determined, μ and σ are estimated using equations (7) and (8), except the unbiased estimator of variance is used in equation (8).

Royston (1992b) developed this estimator as a quick way to estimate γ.
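The symmetry condition behind equation (33) (the largest and smallest log-scale observations equidistant from the log-scale median) gives a closed-form estimate in terms of the original observations, so no iteration is needed. A sketch (the helper name is invented; the denominator is assumed nonzero, i.e., an asymmetric sample):

```r
# Royston's closed-form threshold estimate, equation (33).
royston.gamma <- function(x) {
  x1 <- min(x); xn <- max(x); xm <- median(x)
  (x1 * xn - xm^2) / (x1 + xn - 2 * xm)
}

royston.gamma(c(1, 2, 8))
# 0.8: log(c(1, 2, 8) - 0.8) = log(c(0.2, 1.2, 7.2)) is symmetric
# about its median, since 7.2/1.2 = 1.2/0.2 = 6
```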

Confidence Intervals
This section explains three different methods for constructing confidence intervals for the threshold parameter γ, or the median of the three-parameter lognormal distribution, which is given by:

Med[X] = γ + exp(μ) = γ + β \;\;\;\; (34)

Normal Approximation Based on Asymptotic Variances and Covariances (ci.method="avar")
Formulas for asymptotic variances and covariances for the three-parameter lognormal distribution, based on the information matrix, are given in Cohen (1951), Cohen and Whitten (1980), Cohen et al., (1985), and Cohen (1988). The relevant quantities for γ and the median are:

Var(\hat{γ}) = σ^2_{\hat{γ}} = \frac{σ^2}{n} \, (\frac{β^2}{ω}) H \;\;\;\; (35)

Var(\hat{β}) = σ^2_{\hat{β}} = \frac{σ^2}{n} \, β^2 (1 + H) \;\;\;\; (36)

Cov(\hat{γ}, \hat{β}) = σ_{\hat{γ}, \hat{β}} = \frac{-σ^3}{n} \, (\frac{β^2}{√{ω}}) H \;\;\;\; (37)

where

H = [ω (1 + σ^2) - 2σ^2 - 1]^{-1} \;\;\;\; (38)

A two-sided (1-α)100\% confidence interval for γ is computed as:

\hat{γ} - t_{n-2, 1-α/2} \hat{σ}_{\hat{γ}}, \, \hat{γ} + t_{n-2, 1-α/2} \hat{σ}_{\hat{γ}} \;\;\;\; (39)

where t_{ν, p} denotes the p'th quantile of Student's t-distribution with ν degrees of freedom, and the quantity \hat{σ}_{\hat{γ}} is computed using equations (35) and (38) and substituting estimated values of β, ω, and σ. One-sided confidence intervals are computed in a similar manner.

A two-sided (1-α)100\% confidence interval for the median (see equation (34) above) is computed as:

\hat{γ} + \hat{β} - t_{n-2, 1-α/2} \hat{σ}_{\hat{γ} + \hat{β}}, \, \hat{γ} + \hat{β} + t_{n-2, 1-α/2} \hat{σ}_{\hat{γ} + \hat{β}} \;\;\;\; (40)

where

\hat{σ}^2_{\hat{γ} + \hat{β}} = \hat{σ}^2_{\hat{γ}} + \hat{σ}^2_{\hat{β}} + 2 \hat{σ}_{\hat{γ}, \hat{β}} \;\;\;\; (41)

is computed using equations (35)-(38) and substituting estimated values of β, ω, and σ. One-sided confidence intervals are computed in a similar manner.

This method of constructing confidence intervals is analogous to using the Wald test (e.g., Silvey, 1975, pp.115-118) to test hypotheses on the parameters.

Because of the regularity problems associated with the global maximum likelihood estimators, it is questionable whether the asymptotic variances and covariances shown above apply to local maximum likelihood estimators. Simulation studies, however, have shown that these estimates of variance and covariance perform reasonably well (Harter and Moore, 1966; Cohen and Whitten, 1980).

Note that this method of constructing confidence intervals can be used with estimators other than the lmle's. Cohen and Whitten (1980) and Cohen et al. (1985) found that the asymptotic variances and covariances are reasonably close to corresponding simulated variances and covariances for the modified method of moments estimators (method="mmme").
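As a concrete check, equations (35), (38), and (39) can be assembled into a small function (the name is invented; estimated values are plugged in for β, ω, and σ). Feeding it the mmme estimates from the EXAMPLES section reproduces, to rounding, the confidence limits shown there:

```r
# Normal-approximation CI for the threshold, equations (35), (38), (39).
avar.ci.threshold <- function(mu, sigma, gamma, n, conf.level = 0.95) {
  beta  <- exp(mu)                                      # equation (17)
  omega <- exp(sigma^2)                                 # equation (18)
  H  <- 1 / (omega * (1 + sigma^2) - 2 * sigma^2 - 1)   # equation (38)
  se <- sqrt((sigma^2 / n) * (beta^2 / omega) * H)      # sqrt of equation (35)
  alpha <- 1 - conf.level
  gamma + c(-1, 1) * qt(1 - alpha / 2, df = n - 2) * se # equation (39)
}

# mmme estimates from the EXAMPLES section:
avar.ci.threshold(mu = 1.5206664, sigma = 0.5330974, gamma = 9.6620403, n = 20)
# approximately (6.985, 12.339)
```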

Likelihood Profile (ci.method="likelihood.profile")
Griffiths (1980) suggested constructing confidence intervals for the threshold parameter γ based on the profile likelihood function given in equations (9) and (10). Royston (1992b) further elaborated upon this procedure. A two-sided (1-α)100\% confidence interval for η is constructed as:

[η_{LCL}, η_{UCL}] \;\;\;\; (42)

by finding the two values of η (one larger than the lmle of η and one smaller than the lmle of η) that satisfy:

log[L(η)] = log[L(\hat{η}_{lmle})] - \frac{1}{2} χ^2_{1, α/2} \;\;\;\; (43)

where χ^2_{ν, p} denotes the p'th quantile of the chi-square distribution with ν degrees of freedom. Once these values are found, the two-sided confidence for γ is computed as:

[γ_{LCL}, γ_{UCL}] \;\;\;\; (44)

where

γ_{LCL} = x_{(1)} - exp(-η_{LCL}) \;\;\;\; (45)

γ_{UCL} = x_{(1)} - exp(-η_{UCL}) \;\;\;\; (46)

One-sided intervals are constructed in a similar manner.

This method of constructing confidence intervals is analogous to using the likelihood-ratio test (e.g., Silvey, 1975, pp.108-115) to test hypotheses on the parameters.

To construct a two-sided (1-α)100\% confidence interval for the median (see equation (34)), Royston (1992b) suggested the following procedure:

1. Construct a confidence interval for γ using the likelihood profile procedure.

2. Construct a confidence interval for β as:

[β_{LCL}, β_{UCL}] = [exp(\hat{μ} - t_{n-2, 1-α/2} \frac{\hat{σ}}{√{n}}), \, exp(\hat{μ} + t_{n-2, 1-α/2} \frac{\hat{σ}}{√{n}})] \;\;\;\; (47)

3. Construct the confidence interval for the median as:

[γ_{LCL} + β_{LCL}, γ_{UCL} + β_{UCL}] \;\;\;\; (48)

Royston (1992b) actually suggested using the quantile from the standard normal distribution instead of Student's t-distribution in step 2 above. The function elnorm3, however, uses the Student's t quantile.
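Step 2 exponentiates a t-based interval for μ. A minimal sketch (the helper name is invented):

```r
# Equation (47): confidence limits for beta = exp(mu),
# using the Student's t quantile with n - 2 degrees of freedom.
beta.ci <- function(mu, sigma, n, conf.level = 0.95) {
  alpha <- 1 - conf.level
  exp(mu + c(-1, 1) * qt(1 - alpha / 2, df = n - 2) * sigma / sqrt(n))
}
```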

Note that this method of constructing confidence intervals can be used with estimators other than the lmle's.

Royston's Confidence Interval Based on Significant Skewness (ci.method="skewness")
Royston (1992b) suggested constructing confidence intervals for the threshold parameter γ based on the idea behind the zero-skewness estimator (method="zero.skew"). A two-sided (1-α)100\% confidence interval for γ is constructed by finding the two values of γ that yield a p-value of α/2 for the test of zero-skewness on the observations \underline{y} defined in equation (6) (see gofTest). One-sided confidence intervals are constructed in a similar manner.

To construct (1-α)100\% confidence intervals for the median (see equation (34)), the exact same procedure is used as for ci.method="likelihood.profile", except that the confidence interval for γ is based on the zero-skewness method just described instead of the likelihood profile method.

### Value

a list of class "estimate" containing the estimated parameters and other information. See
estimate.object for details.

### Note

The problem of estimating the parameters of a three-parameter lognormal distribution has been extensively discussed by Aitchison and Brown (1957, Chapter 6), Calitz (1973), Cohen (1951), Cohen (1988), Cohen and Whitten (1980), Cohen et al. (1985), Griffiths (1980), Harter and Moore (1966), Hill (1963), and Royston (1992b). Stedinger (1980) and Hoshi et al. (1984) discuss fitting the three-parameter lognormal distribution to hydrologic data.

The global maximum likelihood estimates are inadmissible. In the past, several researchers have found that the local maximum likelihood estimates (lmle's) occasionally fail because of convergence problems, but they were not using the likelihood profile and reparameterization of Griffiths (1980). Cohen (1988) recommends the modified methods of moments estimators over lmle's because they are easy to compute, they are unbiased with respect to μ and σ^2 (the mean and standard deviation on the log-scale), their variances are minimal or near minimal, and they do not suffer from regularity problems.

Because the distribution of the lmle of the threshold parameter γ is far from normal for moderate sample sizes (Griffiths, 1980), it is questionable whether confidence intervals for γ or the median based on asymptotic variances and covariances will perform well. Cohen and Whitten (1980) and Cohen et al. (1985), however, found that the asymptotic variances and covariances are reasonably close to corresponding simulated variances and covariances for the modified method of moments estimators (method="mmme"). In a simulation study (5000 Monte Carlo trials), Royston (1992b) found that the coverage of confidence intervals for γ based on the likelihood profile (ci.method="likelihood.profile") was very close to the nominal level (94.1% for a nominal level of 95%), although not symmetric. Royston (1992b) also found that the coverage of confidence intervals for γ based on the skewness method (ci.method="skewness") was also very close (95.4%) and symmetric.

### Author(s)

Steven P. Millard (EnvStats@ProbStatInfo.com)

### References

Aitchison, J., and J.A.C. Brown (1957). The Lognormal Distribution (with special references to its uses in economics). Cambridge University Press, London, Chapter 5.

Calitz, F. (1973). Maximum Likelihood Estimation of the Parameters of the Three-Parameter Lognormal Distribution–a Reconsideration. Australian Journal of Statistics 15(3), 185–190.

Cohen, A.C. (1951). Estimating Parameters of Logarithmic-Normal Distributions by Maximum Likelihood. Journal of the American Statistical Association 46, 206–212.

Cohen, A.C. (1988). Three-Parameter Estimation. In Crow, E.L., and K. Shimizu, eds. Lognormal Distributions: Theory and Applications. Marcel Dekker, New York, Chapter 4.

Cohen, A.C., and B.J. Whitten. (1980). Estimation in the Three-Parameter Lognormal Distribution. Journal of the American Statistical Association 75, 399–404.

Cohen, A.C., B.J. Whitten, and Y. Ding. (1985). Modified Moment Estimation for the Three-Parameter Lognormal Distribution. Journal of Quality Technology 17, 92–99.

Crow, E.L., and K. Shimizu. (1988). Lognormal Distributions: Theory and Applications. Marcel Dekker, New York, Chapter 2.

Griffiths, D.A. (1980). Interval Estimation for the Three-Parameter Lognormal Distribution via the Likelihood Function. Applied Statistics 29, 58–68.

Harter, H.L., and A.H. Moore. (1966). Local-Maximum-Likelihood Estimation of the Parameters of Three-Parameter Lognormal Populations from Complete and Censored Samples. Journal of the American Statistical Association 61, 842–851.

Heyde, C.C. (1963). On a Property of the Lognormal Distribution. Journal of the Royal Statistical Society, Series B 25, 392–393.

Hill, B.M. (1963). The Three-Parameter Lognormal Distribution and Bayesian Analysis of a Point-Source Epidemic. Journal of the American Statistical Association 58, 72–84.

Hoshi, K., J.R. Stedinger, and J. Burges. (1984). Estimation of Log-Normal Quantiles: Monte Carlo Results and First-Order Approximations. Journal of Hydrology 71, 1–30.

Johnson, N. L., S. Kotz, and N. Balakrishnan. (1994). Continuous Univariate Distributions, Volume 1. Second Edition. John Wiley and Sons, New York.

Royston, J.P. (1992b). Estimation, Reference Ranges and Goodness of Fit for the Three-Parameter Log-Normal Distribution. Statistics in Medicine 11, 897–912.

Stedinger, J.R. (1980). Fitting Lognormal Distributions to Hydrologic Data. Water Resources Research 16(3), 481–490.

### See Also

Lognormal3, Lognormal, LognormalAlt, Normal.

### Examples

```r
# Generate 20 observations from a 3-parameter lognormal distribution
# with parameters meanlog=1.5, sdlog=1, and threshold=10, then use
# Cohen and Whitten's (1980) modified moments estimators to estimate
# the parameters, and construct a confidence interval for the
# threshold based on the estimated asymptotic variance.
# (Note: the call to set.seed simply allows you to reproduce this example.)

set.seed(250)
dat <- rlnorm3(20, meanlog = 1.5, sdlog = 1, threshold = 10)
elnorm3(dat, method = "mmme", ci = TRUE)

#Results of Distribution Parameter Estimation
#--------------------------------------------
#
#Assumed Distribution:            3-Parameter Lognormal
#
#Estimated Parameter(s):          meanlog   = 1.5206664
#                                 sdlog     = 0.5330974
#                                 threshold = 9.6620403
#
#Estimation Method:               mmme
#
#Data:                            dat
#
#Sample Size:                     20
#
#Confidence Interval for:         threshold
#
#Confidence Interval Method:      Normal Approximation
#                                 Based on Asymptotic Variance
#
#Confidence Interval Type:        two-sided
#
#Confidence Level:                95%
#
#Confidence Interval:             LCL =  6.985258
#                                 UCL = 12.338823

#----------

# Repeat the above example using the other methods of estimation
# and compare.

round(elnorm3(dat, "lmle")$parameters, 1)
#  meanlog     sdlog threshold
#      1.3       0.7      10.5

round(elnorm3(dat, "mme")$parameters, 1)
#  meanlog     sdlog threshold
#      2.1       0.3       6.0

round(elnorm3(dat, "mmue")$parameters, 1)
#  meanlog     sdlog threshold
#      2.2       0.3       5.8

round(elnorm3(dat, "mmme")$parameters, 1)
#  meanlog     sdlog threshold
#      1.5       0.5       9.7

round(elnorm3(dat, "zero.skew")$parameters, 1)
#  meanlog     sdlog threshold
#      1.3       0.6      10.3

round(elnorm3(dat, "royston")$parameters, 1)
#  meanlog     sdlog threshold
#      1.4       0.6      10.1

#----------

# Compare methods for computing a two-sided 95% confidence interval
# for the threshold:
# modified method of moments estimator using asymptotic variance,
# lmle using asymptotic variance,
# lmle using likelihood profile, and
# zero-skewness estimator using the skewness method.

elnorm3(dat, method = "mmme", ci = TRUE,
  ci.method = "avar")$interval$limits
#      LCL       UCL
# 6.985258 12.338823

elnorm3(dat, method = "lmle", ci = TRUE,
  ci.method = "avar")$interval$limits
#      LCL       UCL
# 9.017223 11.980107

elnorm3(dat, method = "lmle", ci = TRUE,
  ci.method = "likelihood.profile")$interval$limits
#      LCL       UCL
# 3.699989 11.266029

elnorm3(dat, method = "zero.skew", ci = TRUE,
  ci.method = "skewness")$interval$limits
#      LCL      UCL
#-25.18851 11.18652

#----------

# Now construct a confidence interval for the median of the distribution
# based on using the modified method of moments estimator for threshold
# and the asymptotic variances and covariances.  Note that the true median
# is given by threshold + exp(meanlog) = 10 + exp(1.5) = 14.48169.

elnorm3(dat, method = "mmme", ci = TRUE, ci.parameter = "median")

#Results of Distribution Parameter Estimation
#--------------------------------------------
#
#Assumed Distribution:            3-Parameter Lognormal
#
#Estimated Parameter(s):          meanlog   = 1.5206664
#                                 sdlog     = 0.5330974
#                                 threshold = 9.6620403
#
#Estimation Method:               mmme
#
#Data:                            dat
#
#Sample Size:                     20
#
#Confidence Interval for:         median
#
#Confidence Interval Method:      Normal Approximation
#                                 Based on Asymptotic Variance
#
#Confidence Interval Type:        two-sided
#
#Confidence Level:                95%
#
#Confidence Interval:             LCL = 11.20541
#                                 UCL = 17.26922

#----------

# Compare methods for computing a two-sided 95% confidence interval
# for the median:
# modified method of moments estimator using asymptotic variance,
# lmle using asymptotic variance,
# lmle using likelihood profile, and
# zero-skewness estimator using the skewness method.

elnorm3(dat, method = "mmme", ci = TRUE, ci.parameter = "median",
  ci.method = "avar")$interval$limits
#     LCL      UCL
#11.20541 17.26922

elnorm3(dat, method = "lmle", ci = TRUE, ci.parameter = "median",
  ci.method = "avar")$interval$limits
#     LCL      UCL
#12.28326 15.87233

elnorm3(dat, method = "lmle", ci = TRUE, ci.parameter = "median",
  ci.method = "likelihood.profile")$interval$limits
#      LCL       UCL
# 6.314583 16.165525

elnorm3(dat, method = "zero.skew", ci = TRUE, ci.parameter = "median",
  ci.method = "skewness")$interval$limits
#      LCL      UCL
#-22.38322 16.33569

#----------

# Clean up
#---------
rm(dat)
```
