```
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

```
library(varTestnlme)
```

The **varTestnlme** package is very easy to use. Below are small examples showing how to run it for linear, generalized linear, and nonlinear mixed-effects models.

Mixed-effects models can be fitted using **nlme** or **lme4**, but also using **saemix**. **varTestnlme** can be used to compare two nested models using likelihood ratio tests, where the variance of at least one random effect is tested equal to 0. Fixed effects can also be tested simultaneously, as well as covariances.

```
# Load the packages
library(nlme)
library(lme4)
library(saemix)
library(EnvStats)
```

Here we focus on models run using **lme4** and **nlme**, but **saemix** can also be used.

We illustrate the results on the **Orthodont** dataset, which is part of the **nlme** package. We are interested in modeling the distance between the pituitary and the pterygomaxillary fissure (in mm) as a function of age, in 27 children. We will fit a random slope and random intercept model, and test whether the slope is random or not.

We first need to fit the two nested models: the full model corresponding to $H_1$ and the null model corresponding to $H_0$, where there is no random effect associated with `age`.

```
data("Orthodont")

# using nlme, with correlated slope and intercept
m1.nlme <- lme(distance ~ 1 + Sex + age + age*Sex,
               random = pdSymm(Subject ~ 1 + age),
               data = Orthodont, method = "ML")
m0.nlme <- lme(distance ~ 1 + Sex,
               random = ~ 1 | Subject,
               data = Orthodont, method = "ML")

# using lme4, with correlated slope and intercept
m1.lme4 <- lmer(distance ~ 1 + Sex + age + age*Sex + (1 + age | Subject),
                data = Orthodont, REML = FALSE)
m0.lme4 <- lmer(distance ~ 1 + Sex + (1 | Subject),
                data = Orthodont, REML = FALSE)
```

Now we can run the likelihood ratio test using the **varTestnlme** package. The function returns an object of S3 class `htest`.

```
vt1.nlme <- varCompTest(m1.nlme, m0.nlme)
vt1.lme4 <- varCompTest(m1.lme4, m0.lme4)
```

Using the `EnvStats` package, nice printing options are available for objects of class `htest`:

```
print(vt1.nlme)
```

It is also possible to access the components of the object using `$` or `[[`:

```
vt1.nlme$statistic
vt1.nlme$p.value
```
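Since `htest` objects are standard R lists, the `[[` syntax gives the same result, e.g.:

```
vt1.nlme[["statistic"]]
vt1.nlme[["p.value"]]
```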

For the p-value, four different values are provided:

- the p-value computed using the chi-bar-square expression as a mixture of chi-square distributions. This requires that the weights of the chi-bar-square distribution are available, either because their exact expression is known (only in some simple cases), or because the user asks for their approximation via the option `pval.comp = "both"` or `pval.comp = "approx"`.
- the p-value computed using a sample from the chi-bar-square distribution. This sample is used to approximate the chi-bar-square weights, and the corresponding p-value is therefore only available when `pval.comp = "both"` or `pval.comp = "approx"`.
- the lower bound on the p-value: this is always available.
- the upper bound on the p-value: this is always available.
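For instance, to request the approximation of the chi-bar-square weights, and hence the first two p-values above, one can add `pval.comp = "approx"` to the call. This is a minimal sketch reusing the models fitted above; the Monte Carlo approximation may take a few seconds:

```
vt1.approx <- varCompTest(m1.nlme, m0.nlme, pval.comp = "approx")
print(vt1.approx)
```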

The variance of the random slope can be tested in the same way when the slope and the intercept are assumed to be uncorrelated:

```
# using nlme, with uncorrelated slope and intercept
m1diag.nlme <- lme(distance ~ 1 + Sex + age + age*Sex,
                   random = pdDiag(Subject ~ 1 + age),
                   data = Orthodont, method = "ML")

# using lme4, with uncorrelated slope and intercept
m1diag.lme4 <- lmer(distance ~ 1 + Sex + age + age*Sex + (1 + age || Subject),
                    data = Orthodont, REML = FALSE)

vt1diag.nlme <- varCompTest(m1diag.nlme, m0.nlme)
vt1diag.lme4 <- varCompTest(m1diag.lme4, m0.lme4)
```

In the previous section, the weights of the chi-bar-square distribution were available explicitly. However, this is not always the case.
Since the computation of these weights can be time consuming, by default the function only computes bounds on the p-value. In many cases this is enough to decide whether or not to reject the null hypothesis. If more precision is wanted or needed, it can be requested via the option `pval.comp`, which then needs to be set to either `pval.comp = "approx"` or `pval.comp = "both"`. In both cases, the `control` argument can be used to control the computation process. It is a list with three slots: `M` (default 5000), the size of the Monte Carlo simulation, `parallel` (default `FALSE`) to specify whether the computation should be parallelized, and `nbcores` (default `1`) to set the number of cores to be used in case of parallel computing.
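For example, the following call requests the approximated p-value using a larger Monte Carlo sample and parallel computation (a sketch only; the values chosen for `M` and `nbcores` are arbitrary):

```
vt1diag.approx <- varCompTest(m1diag.nlme, m0.nlme,
                              pval.comp = "approx",
                              control = list(M = 10000, parallel = TRUE, nbcores = 2))
```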

We can also test whether any random effects are needed at all, by comparing the mixed model to a linear model without random effects:

```
m0noRE <- lm(distance ~ 1 + Sex + age + age*Sex, data = Orthodont)
vt <- varCompTest(m1diag.nlme, m0noRE, pval.comp = "both")
vt2 <- varCompTest(m1diag.lme4, m0noRE)
```

By default, the Fisher information matrix (FIM) is extracted from the fitting package, but it is also possible to compute it via parametric bootstrap. In this case, simply use the option `fim = "compute"`. The default bootstrap sample size is `B = 1000`, but it can be changed via the `control` argument. To get the exact p-value, one can use:

```
varCompTest(m1diag.nlme, m0noRE, fim = "compute", pval.comp = "both",
            control = list(B = 100))
varCompTest(m1diag.lme4, m0noRE, fim = "compute", pval.comp = "both",
            control = list(B = 100))
```

The syntax is identical for generalized linear mixed models. Here we test the variance of the random herd effect in a model fitted on the `cbpp` data from **lme4**:

```
m1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
            family = binomial, data = cbpp)
m0 <- glm(cbind(incidence, size - incidence) ~ period,
          family = binomial, data = cbpp)
varCompTest(m1, m0)
```

Nonlinear mixed-effects models are handled in the same way, here on the `Theoph` data:

```
# with nlme
fm1Theo.nlme <- nlme(conc ~ SSfol(Dose, Time, lKe, lKa, lCl), Theoph,
                     fixed = lKe + lKa + lCl ~ 1,
                     start = c(-2.4, 0.45, -3.2),
                     random = pdSymm(lKa + lCl ~ 1))
fm2Theo.nlme <- nlme(conc ~ SSfol(Dose, Time, lKe, lKa, lCl), Theoph,
                     fixed = lKe + lKa + lCl ~ 1,
                     start = c(-2.4, 0.45, -3.2),
                     random = pdDiag(lCl ~ 1))
varCompTest(fm1Theo.nlme, fm2Theo.nlme)

# with lme4
Th.start <- c(lKe = -2.4, lKa = 0.45, lCl = -3.2)
nm1 <- nlmer(conc ~ SSfol(Dose, Time, lKe, lKa, lCl) ~
               0 + lKe + lKa + lCl + (lKe + lKa + lCl | Subject),
             nAGQ = 0, Theoph, start = Th.start)
nm0 <- nlmer(conc ~ SSfol(Dose, Time, lKe, lKa, lCl) ~
               0 + lKe + lKa + lCl + (lKa + lCl | Subject),
             nAGQ = 0, Theoph, start = Th.start)
varCompTest(nm1, nm0)
```

We can also test for the presence of any randomness in the model, by comparing the nonlinear mixed-effects model to a nonlinear model without random effects fitted with `nls`:

```
fm1 <- nlme(conc ~ SSfol(Dose, Time, lKe, lKa, lCl), Theoph,
            fixed = lKe + lKa + lCl ~ 1,
            start = c(-2.4, 0.45, -3.2),
            random = pdDiag(lKe + lKa + lCl ~ 1))
fm0 <- nls(conc ~ SSfol(Dose, Time, lKe, lKa, lCl), Theoph,
           start = list(lKe = -2.4, lKa = 0.45, lCl = -3.2))
varCompTest(fm1, fm0)
```

We can see that there is no need to approximate the weights of the chi-bar-square distribution, since the bounds on the p-value are sufficient to reject the null hypothesis at any classical level.
