lseq: Generates a sequence of tuning parameters on the log scale


Description

Generates a sequence of tuning parameters \lambda that are equally spaced on the log scale. It may be used as part of constructing a solution path for the main fitting function rpql.

Usage

lseq(from, to, length, decreasing = FALSE)
  

Arguments

from

The minimum tuning parameter to start the sequence from.

to

The maximum tuning parameter to go to.

length

The length of the sequence.

decreasing

Should the sequence be in ascending or descending order?

Details

For joint selection of fixed and random effects in GLMMs, regularized PQL (Hui et al., 2016) works by taking the penalized quasi-likelihood (PQL, Breslow and Clayton, 1993) as a loss function, and then adding penalties to it in order to perform variable selection. The penalties depend upon one or more tuning parameters \lambda > 0, and these are typically chosen by constructing a sequence of \lambda values, fitting regularized PQL at each value, and then using a method such as an information criterion to select the best \lambda and hence the best model. Please see the help file for rpql for more details, and glmnet (Friedman et al., 2010) and ncvreg (Breheny and Huang, 2011) as examples of other packages that perform penalized regression and involve tuning parameter selection.
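As a rough sketch of how lseq might slot into such a solution path (the objects y, X, Z, and id below are hypothetical placeholders, and the exact call should be checked against the rpql help file):

library(rpql)

## 100 tuning parameters, log-spaced over an arbitrary range
lambda_seq <- lseq(1e-6, 1, length = 100)

## y, X, Z, id are placeholder data objects; see ?rpql for their required format
fits <- vector("list", length(lambda_seq))
for (k in seq_along(lambda_seq)) {
    fits[[k]] <- rpql(y = y, X = X, Z = Z, id = id,
                      lambda = lambda_seq[k], pen.type = "lasso")
}
## An information criterion computed from each fit can then be used to
## pick the best lambda, and hence the best model.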

The idea of equally spacing the sequence of \lambda's on the log (base 10) scale may not necessarily be what you want to do, and one is free to use the standard seq() function for constructing sequences instead. Equally spacing the values on the log scale means there will be a large concentration of small tuning parameter values and comparatively few large ones (analogous to a right-skewed distribution). This may be useful if you believe that most of the penalization/variable selection action takes place at smaller values of \lambda.
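A quick comparison makes the difference in spacing concrete (the endpoints and length below are arbitrary choices, and the printed values are rounded):

## Log-spaced: essentially 10^seq(log10(from), log10(to), length.out = length),
## so half the values here fall below about 0.32
lseq(1e-3, 100, length = 5)
## approx. 0.001  0.0178  0.316  5.62  100

## Linearly spaced: values spread evenly, so most are large by comparison
seq(1e-3, 100, length.out = 5)
## approx. 0.001  25.0  50.0  75.0  100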

It is somewhat of an art form to construct a good sequence of tuning parameter values: the smallest \lambda should produce the saturated model if possible, and the largest \lambda should shrink most if not all covariates to zero, i.e., the null model. Good luck!

Value

A sequence of tuning parameter values of length equal to length.

Author(s)

Francis K.C. Hui <francis.hui@gmail.com>, with contributions from Samuel Mueller <samuel.mueller@sydney.edu.au> and A.H. Welsh <Alan.Welsh@anu.edu.au>

Maintainer: Francis Hui <fhui28@gmail.com>

References

  • Breheny, P., and Huang, J. (2011). Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics, 5, 232-253.

  • Breslow, N. E., and Clayton, D. G. (1993). Approximate inference in generalized linear mixed models. Journal of the American Statistical Association, 88, 9-25.

  • Friedman, J., Hastie, T., and Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33, 1-22. URL: http://www.jstatsoft.org/v33/i01/.

  • Hui, F. K. C., Mueller, S., and Welsh, A. H. (2016). Joint selection in mixed models using regularized PQL. Journal of the American Statistical Association: accepted for publication.

See Also

rpql for fitting and performing model selection in GLMMs using regularized PQL.

Examples

## Please see examples in help file for the rpql function
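## A small standalone illustration of lseq itself (the endpoints and
## length below are arbitrary choices):

lambda_seq <- lseq(1e-6, 1, length = 100)
summary(lambda_seq)  ## note the concentration of small values

## The same sequence in descending order
lambda_dec <- lseq(1e-6, 1, length = 100, decreasing = TRUE)
head(lambda_dec)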
