Description
Evaluates whether a fitted mixed model is (almost / near) singular, i.e., the parameters are on the boundary of the feasible parameter space: variances of one or more linear combinations of effects are (close to) zero.
Usage

isSingular(x, tol = 1e-4)

Arguments

x
a fitted mixed model (e.g., the result of lmer() or glmer()).

tol
numerical tolerance for detecting singularity.
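A minimal usage sketch (this assumes the lme4 package; the sleepstudy data ship with lme4):

```r
library(lme4)

## Fit a mixed model with correlated random intercept and slope
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## Check for a (near-)singular fit
isSingular(fm, tol = 1e-4)   # FALSE: no variance parameter is at the boundary
```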
Details

Complex mixed-effect models (i.e., those with a large number of variance-covariance parameters) frequently result in singular fits, i.e. estimated variance-covariance matrices with less than full rank. Less technically, this means that some "dimensions" of the variance-covariance matrix have been estimated as exactly zero. For scalar random effects such as intercept-only models, or two-dimensional random effects such as intercept+slope models, singularity is relatively easy to detect because it leads to random-effect variance estimates of (nearly) zero, or estimates of correlations that are (almost) exactly 1 or -1. However, for more complex models (variance-covariance matrices of dimension >= 3) singularity can be hard to detect; models can often be singular without any of their individual variances being close to zero or correlations being close to +/-1.
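As a sketch of the simple scalar case: when the response is pure noise with no true group effect, the estimated group variance often collapses to the zero boundary and the fit is flagged as singular (the data below are simulated for illustration; the exact estimate depends on the seed):

```r
library(lme4)

set.seed(101)
d <- data.frame(y = rnorm(100),   # pure noise: no real group effect
                g = gl(10, 10))   # 10 groups of 10 observations
fs <- lmer(y ~ 1 + (1 | g), data = d)

## The random-intercept variance is typically estimated at (or near) zero,
## so the fit is usually reported as singular
isSingular(fs)
VarCorr(fs)   # shows the (near-)zero random-intercept standard deviation
```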
This function performs a simple test to determine whether any of the
random effects covariance matrices of a fitted model are singular.
The rePCA
method provides more detail about the
singularity pattern, showing the standard deviations
of orthogonal variance components and the mapping from
variance terms in the model to orthogonal components
(i.e., eigenvector/rotation matrices).
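For example, given a fitted lme4 model, the singularity pattern can be inspected as follows (components with near-zero standard deviation indicate singular directions):

```r
library(lme4)
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## PCA of the random-effects variance-covariance estimates:
## reports the standard deviations of the orthogonal variance components
summary(rePCA(fm))
```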
While singular models are statistically well defined (it is theoretically sensible for the true maximum likelihood estimate to correspond to a singular fit), there are real concerns that (1) singular fits correspond to overfitted models that may have poor power; (2) chances of numerical problems and misconvergence are higher for singular models (e.g. it may be computationally difficult to compute profile confidence intervals for such models); (3) standard inferential procedures such as Wald statistics and likelihood ratio tests may be inappropriate.
There is not yet consensus about how to deal with singularity, or more generally how to choose which random-effects specification (from a range of choices of varying complexity) to use. Some proposals include:
* avoid fitting overly complex models in the first place, i.e. design experiments/restrict models a priori such that the variance-covariance matrices can be estimated precisely enough to avoid singularity (Matuschek et al 2017)

* use some form of model selection to choose a model that balances predictive accuracy and overfitting/type I error (Bates et al 2015, Matuschek et al 2017)

* "keep it maximal", i.e. fit the most complex model consistent with the experimental design, removing only terms required to allow a non-singular fit (Barr et al. 2013), or removing further terms based on p-values or AIC

* use a partially Bayesian method that produces maximum a posteriori (MAP) estimates using regularizing priors to force the estimated random-effects variance-covariance matrices away from singularity (Chung et al 2013, blme package)

* use a fully Bayesian method that both regularizes the model via informative priors and gives estimates and credible intervals for all parameters that average over the uncertainty in the random-effects parameters (Gelman and Hill 2006, McElreath 2015; MCMCglmm, rstanarm and brms packages)
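As a sketch of the partially Bayesian approach above, a MAP fit via the blme package might look like the following (blme is a separate package, not part of lme4; the default covariance prior shown is illustrative, not a recommendation):

```r
## Requires the blme package
library(blme)
library(lme4)   # for the sleepstudy data

## blmer() mirrors the lmer() interface but adds regularizing priors;
## by default it places a Wishart prior on the random-effects covariance,
## pulling the MAP estimate away from the singular boundary
bfm <- blmer(Reaction ~ Days + (Days | Subject), sleepstudy)
summary(bfm)
```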
Value

a logical value
References

Dale J. Barr, Roger Levy, Christoph Scheepers, and Harry J. Tily (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language 68(3), 255-278.
Douglas Bates, Reinhold Kliegl, Shravan Vasishth, and Harald Baayen (2015). Parsimonious Mixed Models; preprint (https://arxiv.org/abs/1506.04967).
Yeojin Chung, Sophia Rabe-Hesketh, Vincent Dorie, Andrew Gelman, and Jingchen Liu (2013). A nondegenerate penalized likelihood estimator for variance parameters in multilevel models. Psychometrika 78, 685-709; doi: 10.1007/s11336-013-9328-2.
Andrew Gelman and Jennifer Hill (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.
Hannes Matuschek, Reinhold Kliegl, Shravan Vasishth, Harald Baayen, and Douglas Bates (2017). Balancing type I error and power in linear mixed models. Journal of Memory and Language 94, 305–315.
Richard McElreath (2015) Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Chapman and Hall/CRC.