# getResidualCov.gllvm: Extract residual covariance matrix from gllvm object

In gllvm: Generalized Linear Latent Variable Models

## Description

Calculates the residual covariance matrix for a gllvm model.

## Usage

```
## S3 method for class 'gllvm'
getResidualCov(object, adjust = 1, site.index = NULL, ...)
```

## Arguments

`object` an object of class 'gllvm'.

`adjust` the type of adjustment used for the negative binomial, binomial and normal distributions when computing the residual correlation matrix. Options are 0 (no adjustment), 1 (the default adjustment) and 2 (an alternative adjustment for the NB distribution); see Details.

`site.index` a site index, a vector of length one, used in the calculation for a GLLVM with a quadratic response model.

`...` not used.

## Details

The residual covariance matrix, which stores information on species co-occurrence that is not explained by the environmental variables (if included), is calculated using the matrix of latent variable loadings, that is, ΘΘ', and the dispersion parameter related to the distribution of choice, if applicable (e.g. in the case of negative binomially distributed responses).

When the responses are modelled using the negative binomial distribution, the residual variances for each species must be adjusted for overdispersion. The two possible adjustment terms are log(φ_j + 1) (`adjust = 1`) and ψ^{(1)}(1/φ_j) (`adjust = 2`), where ψ^{(1)} is the trigamma function.

The negative binomial model can be written using different parametrizations. The residual covariance with `adjust = 1` can be obtained using the lognormal-Poisson parametrization, that is,

Y_{ij} \sim Poisson(μ_{ij} λ_j),

where λ_j \sim lognormal(-σ^2/2, σ^2) with σ^2 = log(φ_j + 1), and log(μ_{ij}) = η_{ij}. Now E[Y_{ij}] = μ_{ij} and V(Y_{ij}) = μ_{ij} + μ_{ij}^2 (exp(σ^2) - 1) = μ_{ij} + μ_{ij}^2 φ_j, which are the same as for the NB distribution. Therefore, on the linear predictor scale, we have the variance

V(log(μ_{ij} λ_j)) = V(log μ_{ij}) + V(log λ_j) = V(u_i'θ_j) + σ^2 = θ_j'θ_j + log(φ_j + 1),

which leads to the residual covariance matrix Θ Θ' + Ψ, where Ψ is the diagonal matrix with log(φ_j + 1) as diagonal elements (`adjust = 1`).
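The key identity here, exp(σ^2) - 1 = φ_j, which makes the lognormal-Poisson variance match the NB variance, is easy to verify numerically. A minimal sketch follows (in Python rather than R, purely for a self-contained check; the dispersion values and mean are made up):

```python
import math

# Made-up NB dispersion parameters phi_j for three species (illustration only)
phis = [0.5, 1.0, 2.0]
mu = 3.0  # an arbitrary mean on the response scale

for phi in phis:
    # adjust = 1 puts sigma^2 = log(phi + 1) on the linear predictor scale ...
    sigma2 = math.log(phi + 1)
    # ... so exp(sigma^2) - 1 recovers phi, and the lognormal-Poisson variance
    # mu + mu^2 (exp(sigma^2) - 1) equals the NB variance mu + mu^2 phi
    var_lognormal_poisson = mu + mu**2 * (math.exp(sigma2) - 1)
    var_nb = mu + mu**2 * phi
    assert math.isclose(var_lognormal_poisson, var_nb)
```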

Or, for a GLLVM where species responses are a quadratic function of the latent variables, we instead have

V(log(μ_{ij} λ_j)) = V(log μ_{ij}) + V(log λ_j) = V(u_i'θ_j - u_i' D_j u_i) + σ^2

= θ_j'θ_j + 2 diag(D_j)'diag(D_j) + log(φ_j + 1),

which leads to the residual covariance matrix Θ Θ' + 2 Γ Γ' + Ψ, where row j of Γ holds the quadratic coefficients diag(D_j). Since the quadratic coefficients are constrained to be positive, the residual covariance in the latter case is, given the same coefficients on the linear term, equal to or more positive than in the linear case.
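The extra 2 diag(D_j)'diag(D_j) term arises because, for u_i \sim N(0, I) and diagonal D_j, V(u_i' D_j u_i) = 2 tr(D_j^2), and u_i'θ_j is uncorrelated with u_i' D_j u_i. This can be checked with a quick Monte Carlo sketch (in Python for a self-contained check; the loadings and quadratic coefficients are hypothetical):

```python
import math
import random

random.seed(1)

theta = [0.7, -0.4]  # hypothetical linear loadings theta_j for one species
d = [0.3, 0.5]       # hypothetical positive quadratic coefficients diag(D_j)

# Simulate eta = u' theta - u' D u for u ~ N(0, I)
n = 100_000
draws = []
for _ in range(n):
    u = [random.gauss(0.0, 1.0) for _ in theta]
    draws.append(sum(t * x for t, x in zip(theta, u))
                 - sum(dk * x * x for dk, x in zip(d, u)))

mean = sum(draws) / n
var_mc = sum((e - mean) ** 2 for e in draws) / (n - 1)

# Theory: V = theta_j' theta_j + 2 diag(D_j)' diag(D_j)
var_theory = sum(t * t for t in theta) + 2 * sum(dk * dk for dk in d)
assert abs(var_mc - var_theory) / var_theory < 0.05
```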

The residual covariance matrix with `adjust = 2` can be obtained by using the Poisson-Gamma parametrization

Y_{ij} \sim Poisson(μ_{ij} λ_j),

where λ_j \sim Gamma(1/φ_j, 1/φ_j) and μ_{ij} is as above. The mean and variance are of a similar form as above, and we have that

V(log(μ_{ij} λ_j)) = V(logμ_{ij}) + V(logλ_j) = θ_j'θ_j + ψ^{(1)}(1/φ_j),

where ψ^{(1)} is the trigamma function.
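The two adjustment terms behave similarly for small overdispersion (both approach φ_j as φ_j → 0) but diverge as φ_j grows, with `adjust = 2` always the larger of the two. The sketch below compares them, approximating the trigamma function by a central second difference of the stdlib `math.lgamma` (an illustration only; the φ values are arbitrary):

```python
import math

def trigamma(x, h=1e-4):
    # psi^(1)(x) = d^2/dx^2 log Gamma(x), here approximated by a
    # central second difference of lgamma (adequate for illustration)
    return (math.lgamma(x + h) - 2.0 * math.lgamma(x) + math.lgamma(x - h)) / h**2

for phi in (0.1, 0.5, 1.0, 2.0):
    adj1 = math.log(phi + 1)  # adjust = 1
    adj2 = trigamma(1 / phi)  # adjust = 2
    # Both terms vanish as phi -> 0, but adjust = 2 is always the larger one
    assert 0 < adj1 < adj2
```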

In the case of the binomial distribution, the adjustment terms (`adjust = 1`) are 1 for the probit link and π^2/3 for the logit link. These are obtained by treating the binomial model as a latent variable model. Assume

Y^*_{ij} = η_{ij} + e_{ij},

where e_{ij} \sim N(0, 1) for the probit model and e_{ij} \sim logistic(0, 1) for the logit model. The binary response is then defined as Y_{ij} = 1 if Y^*_{ij} > 0 and 0 otherwise. Now we have that μ_{ij} = P(Y_{ij} = 1) = P(Y^*_{ij} > 0) = P(η_{ij} > -e_{ij}) = P(e_{ij} <= η_{ij}), which leads to the probit and logit models. On the linear predictor scale we then have that

V(η_{ij} + e_{ij}) = V(η_{ij}) + V(e_{ij}).

For the probit model, the residual covariance matrix is then ΘΘ' + I_m, and for the logit model ΘΘ' + π^2/3 I_m. As above, for a GLLVM where species are a quadratic function of the latent variables, the term 2ΓΓ' is added to the residual covariance matrix.
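One practical consequence: because the logit adjustment π^2/3 ≈ 3.29 is larger than the probit adjustment of 1, logit residual correlations are shrunk toward zero relative to probit ones for the same loadings. A small sketch (in Python for a self-contained check; the two species' loadings are hypothetical):

```python
import math

# Hypothetical loadings theta_j for two species on two latent variables
theta1 = [0.9, 0.1]
theta2 = [-0.4, 0.7]

def resid_corr(c):
    # Residual correlation between the two species when the residual
    # covariance is Theta Theta' + c I_m
    off = sum(a * b for a, b in zip(theta1, theta2))
    v1 = sum(a * a for a in theta1) + c
    v2 = sum(b * b for b in theta2) + c
    return off / math.sqrt(v1 * v2)

corr_probit = resid_corr(1.0)             # probit: c = 1
corr_logit = resid_corr(math.pi**2 / 3)   # logit: c = pi^2 / 3
# The larger logit adjustment shrinks the residual correlation toward zero
assert abs(corr_logit) < abs(corr_probit) < 1
```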

For the normal distribution, we can write

Y_{ij} = η_{ij} + e_{ij},

where e_{ij} \sim N(0, φ_j^2) and thus we have that

V(η_{ij} + e_{ij}) = V(η_{ij}) + V(e_{ij}).

For the Gaussian model, the residual covariance matrix is then ΘΘ' + diag(Φ^2).

## Value

The function returns the following components:

`cov` residual covariance matrix

`trace` trace of the residual covariance matrix, the total variance explained

`var.q` trace of the residual covariance matrix per latent variable, the variance explained per latent variable

`var.q2` trace of the squared term of the residual covariance matrix per latent variable, for quadratic responses; the variance explained per latent variable by the quadratic term
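For the linear part of the model, `trace` and `var.q` obey a simple identity: trace(ΘΘ') = Σ_j θ_j'θ_j, which splits into one non-negative contribution per latent variable (the column sums of squared loadings). A numerical sketch (in Python for a self-contained check; the loadings matrix is hypothetical):

```python
# Hypothetical m x d loadings matrix Theta (3 species, 2 latent variables)
Theta = [[0.8, -0.3],
         [0.2,  0.6],
         [-0.5, 0.4]]

# diag(Theta Theta')_j = theta_j' theta_j, so the trace is the sum of all
# squared loadings ...
trace = sum(t * t for row in Theta for t in row)
# ... and it splits into one non-negative contribution per latent variable,
# which is what var.q reports for the linear case
var_q = [sum(row[q] ** 2 for row in Theta) for q in range(2)]
assert abs(trace - sum(var_q)) < 1e-12
```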

## Author(s)

Francis K.C. Hui, Jenni Niku, David I. Warton, Bert van der Veen

## Examples

```
## Not run:
# Load a dataset from the mvabund package
data(antTraits)
y <- as.matrix(antTraits$abund)
# Fit gllvm model
fit <- gllvm(y = y, family = poisson())
# residual covariance:
rescov <- getResidualCov(fit)
rescov$cov
# Trace of the covariance matrix
rescov$trace
# Variance explained per latent variable
rescov$var.q

## End(Not run)
```

gllvm documentation built on July 29, 2021, 1:06 a.m.