analyze_convergence: Analyze convergence of Lambert W estimators

View source: R/analyze_convergence.R


Analyze convergence of Lambert W estimators

Description

Analyzes the feasibility of a Lambert W x F distribution for a given dataset based on bootstrapping. In particular, it checks whether parameter estimates support the hypothesis that the data indeed follows a Lambert W x F distribution with finite mean and variance of the input distribution, which is an implicit assumption of Lambert W x F random variables in Goerg (2011).

See Goerg (2016) for an alternative definition that does not rely on finite second-order moments (set use.mean.variance = FALSE to use that type of Lambert W x F distribution).

Usage

analyze_convergence(
  LambertW_fit,
  sample.sizes = round(seq(0.2, 1, length = 5) * length(LambertW_fit$data)),
  ...
)

## S3 method for class 'convergence_LambertW_fit'
summary(object, type = c("basic", "norm", "perc", "bca"), ...)

## S3 method for class 'convergence_LambertW_fit'
plot(x, ...)

Arguments

LambertW_fit, object, x

an object of class "LambertW_fit" with an IGMM or MLE_LambertW estimate.

sample.sizes

sample sizes for several steps of the convergence analysis. By default, one of them equals the length of the original data, which leads to improved plots (see plot.convergence_LambertW_fit); this is not required, though (see the sketch below).

...

additional arguments passed to bootstrap or to boot.ci in the boot package.

type

type of confidence interval computed from the bootstrap estimates. This argument is passed along to boot.ci. However, unlike the type argument of boot.ci, this summary method accepts only one of c("basic", "norm", "perc", "bca"). See boot.ci for details.
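
A minimal sketch of these arguments in use (the data vector y, the parameter values, and the chosen sample sizes are placeholders for illustration only):

library(LambertW)
# simulate a skewed Lambert W x Gaussian sample (placeholder data)
y <- rLambertW(n = 500, distname = "normal",
               theta = list(gamma = 0.2, beta = c(0, 1)))
fit <- IGMM(y, type = "s")
# convergence analysis on a custom grid of bootstrap sample sizes
conv <- analyze_convergence(fit, sample.sizes = c(100, 250, 500))
summary(conv, type = "norm")  # exactly one of "basic", "norm", "perc", "bca"
plot(conv)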

Details

Stehlik and Hermann (2015) show that when researchers erroneously apply the IGMM algorithm outlined in Goerg (2011) to data that does not have finite input variance (and hence mean), the IGMM estimates do not converge.

In practice, researchers should of course first check whether a given model is appropriate for their data-generating process. Since the original Lambert W x F distributions assume that mean and variance are finite, it is not a given that the Lambert W x F setting makes sense for a specific dataset.

The bootstrap analysis reverses Stehlik and Hermann's argument and checks whether the IGMM estimates \lbrace \hat{\tau}^{(n)} \rbrace_{n} converge for increasing (bootstrapped) sample size n: if they do, then modeling the data with a Lambert W x F distribution is appropriate; if the estimates do not converge, this indicates that the input data is too heavy-tailed for a classic skewed location-scale Lambert W x F framework. In this case, take a look at (double-)heavy-tailed Lambert W x F distributions (type = 'hh') or unrestricted location-scale Lambert W x F distributions (use.mean.variance = FALSE); see the sketch below. For details see Goerg (2016).
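
If the analysis suggests non-convergence, a hedged sketch of the two fallback fits mentioned above (y is a placeholder data vector; the assumption here is that use.mean.variance is passed to the likelihood-based estimator MLE_LambertW, and the type choice in that call is illustrative):

# double heavy-tail Lambert W x F fit via IGMM
fit.hh <- IGMM(y, type = "hh")
# location-scale Lambert W x F fit without the finite mean/variance assumption
fit.noMV <- MLE_LambertW(y, distname = "normal", type = "s",
                         use.mean.variance = FALSE)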

References

Stehlik and Hermann (2015). "Letter to the Editor." Ann. Appl. Stat. 9, 2051. doi:10.1214/15-AOAS864. https://projecteuclid.org/euclid.aoas/1453994190

Examples

## Not run: 

sim.data <- list("Lambert W x Gaussian" = 
                    rLambertW(n = 100, distname = "normal", 
                              theta = list(gamma = 0.1, beta = c(1, 2))),
                 "Cauchy" = rcauchy(n = 100))
# do not use lapply() as it does not work well with match.call() in
# bootstrap()
igmm.ests <- list()
conv.analyses <- list()
for (nn in names(sim.data)) {
  igmm.ests[[nn]] <- IGMM(sim.data[[nn]], type = "s")
  conv.analyses[[nn]] <- analyze_convergence(igmm.ests[[nn]])
}
# plot() returns a list of ggplot objects; add the dataset name as title
plot.lists <- lapply(conv.analyses, plot)
for (nn in names(plot.lists)) {
  plot.lists[[nn]] <- lapply(plot.lists[[nn]], "+", ggtitle(nn))
}

require(gridExtra)
# arrange the corresponding panels of both datasets side by side
for (jj in seq_along(plot.lists[[1]])) {
  grid.arrange(plot.lists[[1]][[jj]], plot.lists[[2]][[jj]], ncol = 2)
}
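
# Hedged extension of this example: numeric bootstrap confidence intervals
# for each dataset; type must be one of "basic", "norm", "perc", "bca".
conv.summaries <- lapply(conv.analyses, summary, type = "basic")
conv.summaries[["Cauchy"]]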

## End(Not run)

