Fits GLMs with a random intercept by Maximum Likelihood, with numerical integration performed via Gauss-Hermite quadrature.
glmmML(formula, family = binomial, data, cluster, weights,
       cluster.weights, subset, na.action,
       offset, prior = c("gaussian", "logistic", "cauchy"),
       start.coef = NULL, start.sigma = NULL, fix.sigma = FALSE, x = FALSE,
       control = list(epsilon = 1e-8, maxit = 200, trace = FALSE),
       method = c("Laplace", "ghq"), n.points = 8, boot = 0)

formula 
a symbolic description of the model to be fit. The details of model specification are given below. 
family 
Currently, the only valid values are binomial and poisson. 
data 
an optional data frame containing the variables in the model. By default the variables are taken from ‘environment(formula)’, typically the environment from which ‘glmmML’ is called. 
cluster 
Factor indicating which items are correlated. 
weights 
Case weights. Defaults to one. 
cluster.weights 
Cluster weights. Defaults to one. 
subset 
an optional vector specifying a subset of observations to be used in the fitting process. 
na.action 
See glm. 
start.coef 
starting values for the parameters in the linear predictor. Defaults to zero. 
start.sigma 
starting value for the mixing standard deviation. Defaults to 0.5. 
fix.sigma 
Should sigma be fixed at start.sigma? 
x 
If TRUE, the design matrix is returned (as x). 
offset 
this can be used to specify an a priori known component to be included in the linear predictor during fitting. 
prior 
Which "prior" distribution (for the random effects)? Possible choices are "gaussian" (default), "logistic", and "cauchy". 
control 
Controls the convergence criteria. See glm.control for details. 

method 
There are two choices, "Laplace" (default) and "ghq" (Gauss-Hermite). 
n.points 
Number of points in the Gauss-Hermite quadrature. With
n.points == 1, the Gauss-Hermite quadrature is the same as the Laplace
approximation. Ignored if method = "Laplace". 
boot 
Do you want a bootstrap estimate of cluster effect? The default
is No (boot = 0); a positive integer gives that many bootstrap replicates. 
The integrals in the log likelihood function are evaluated by the Laplace approximation (default) or Gauss-Hermite quadrature. The latter is now fully adaptive; however, only approximate estimates of variances are available for the Gauss-Hermite (n.points > 1) method.
For the binomial families, the response can be a two-column matrix; see the help page for glm for details.
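A minimal sketch of a typical call (not taken from the package documentation; the simulated data and variable names are illustrative), fitting the same model with both integration methods:

```r
## Simulated clustered binomial data with a random intercept.
library(glmmML)

set.seed(1)
n.clust <- 50; n.per <- 10
id <- rep(1:n.clust, each = n.per)
x  <- rnorm(n.clust * n.per)
b  <- rnorm(n.clust, sd = 0.8)          # true random intercepts
y  <- rbinom(n.clust * n.per, 1, plogis(-0.5 + x + b[id]))
dat <- data.frame(y = y, x = x, id = id)

## Laplace approximation (default) vs. adaptive Gauss-Hermite quadrature.
fit.lap <- glmmML(y ~ x, family = binomial, data = dat, cluster = id)
fit.ghq <- glmmML(y ~ x, family = binomial, data = dat, cluster = id,
                  method = "ghq", n.points = 12)
summary(fit.lap)
```

With a moderate random-effects standard deviation, the two methods should give similar estimates; increasing n.points refines the quadrature.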
The return value is a list, an object of class 'glmmML'. The components are:
boot 
No. of boot replicates 
converged 
Logical 
coefficients 
Estimated regression coefficients 
coef.sd 
Their standard errors 
sigma 
The estimated random effects' standard deviation 
sigma.sd 
Its standard error 
variance 
The estimated variance-covariance matrix. The last
column/row corresponds to the standard
deviation of the random effects (sigma). 
aic 
AIC 
bootP 
Bootstrap p value from testing the null hypothesis of no random effect (sigma = 0) 
deviance 
Deviance 
mixed 
Logical 
df.residual 
Degrees of freedom 
cluster.null.deviance 
Deviance from a glm with no
clustering. Subtracting the deviance of the fitted model gives a test statistic for the null hypothesis of no clustering. 
cluster.null.df 
Its degrees of freedom 
posterior.modes 
Estimated posterior modes of the random effects 
terms 
The terms object 
info 
From hessian inversion. Should be 0. If not, no variances could be estimated. You could try fixing sigma at the estimated value and rerun. 
prior 
Which prior was used? 
call 
The function call 
x 
The design matrix if asked for, otherwise not present 
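The components above can be extracted directly from the returned object. A hedged sketch (the simulated data are illustrative, not from the package documentation):

```r
library(glmmML)

set.seed(2)
id <- rep(1:40, each = 8)
x  <- rnorm(320)
y  <- rbinom(320, 1, plogis(0.5 * x + rnorm(40, sd = 0.7)[id]))

fit <- glmmML(y ~ x, family = binomial, cluster = id)
fit$coefficients   # estimated regression coefficients
fit$sigma          # estimated random-effects standard deviation
fit$info           # should be 0 if the hessian inversion succeeded
## Statistic for testing the null hypothesis of no clustering:
fit$cluster.null.deviance - fit$deviance
```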
The optimization may not converge with
the default value of start.sigma. In that case, try different
start values for sigma. If there is still no convergence, consider
fixing sigma at several values and studying the
profile likelihood.
Göran Broström
Broström, G. and Holmberg, H. (2011). Generalized linear models with clustered data: Fixed and random effects models. Computational Statistics and Data Analysis 55:3123-3134.
glmmboot, glm, optim, lmer in Matrix, and glmmPQL in MASS.