knitr::opts_chunk$set(echo=TRUE)

Linear mixed model

\begin{align} \boldsymbol{y} = X \boldsymbol{\beta} + Z \boldsymbol{u} + \boldsymbol{\epsilon} \end{align}

where:

- $\boldsymbol{y}$ is the $n$-vector of phenotypic observations;
- $X$ is the $n \times p$ design (incidence) matrix of the fixed effects and $\boldsymbol{\beta}$ the $p$-vector of fixed effects;
- $Z$ is the $n \times q$ design (incidence) matrix of the random effects and $\boldsymbol{u}$ the $q$-vector of random effects;
- $\boldsymbol{\epsilon}$ is the $n$-vector of errors.

The fixed effects model the mean of $\boldsymbol{y}$ and the random effects model its variance.

Variance-ratio parametrization

Assumptions about the (co)variances:

- $Var(\boldsymbol{u}) = \sigma^2 \, G(\gamma)$, where $G$ is a $q \times q$ matrix depending on variance ratio(s) $\gamma$ (e.g., $G(\gamma) = \gamma A$ with $A$ a known relationship matrix);
- $Var(\boldsymbol{\epsilon}) = \sigma^2 \, R(\phi)$, where $R$ is an $n \times n$ correlation matrix depending on parameters $\phi$;
- $Cov(\boldsymbol{u}, \boldsymbol{\epsilon}) = 0$.

Assumptions about the distributions:

- $\boldsymbol{u} \, | \, \sigma^2, \gamma \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 \, G(\gamma))$;
- $\boldsymbol{\epsilon} \, | \, \sigma^2, \phi \sim \mathcal{N}(\boldsymbol{0}, \sigma^2 \, R(\phi))$, independently of $\boldsymbol{u}$.

As a result:

\[ \boldsymbol{y} \, | \, \boldsymbol{\beta}, \boldsymbol{u}, \sigma^2, \phi \sim \mathcal{N}(X \boldsymbol{\beta} + Z \boldsymbol{u}, \sigma^2\, R(\phi)) \]

Let us also introduce $\theta = (\sigma^2, \gamma, \phi)^T$. After integrating out the $\boldsymbol{u}$ (i.e., marginalizing them out):

\[ \boldsymbol{y} \, | \, \boldsymbol{\beta}, \theta \sim \mathcal{N}(X \boldsymbol{\beta}, V(\theta)) \]

where: \[ var(\boldsymbol{y}) = V(\theta) = \sigma^2 \, H \; \text{ with } \; H := Z G(\gamma) Z^T + R(\phi) \]

Linear-variance parametrization

In variance-component models, i.e., without covariance structure in the errors (e.g., $R = \text{Id}_n$), it is not necessary to use $\sigma^2$ as an overall scaling factor:

- $Var(\boldsymbol{u}) = G$, e.g., $G = \sigma_u^2 A$ with $A$ a known relationship matrix;
- $Var(\boldsymbol{\epsilon}) = \sigma^2 \, \text{Id}_n$.

Hence:

\[ var(\boldsymbol{y}) = V(\theta) = H \; \text{ with } \; H := Z G Z^T + \sigma^2 \text{Id}_n \]

Likelihood

By definition, the likelihood, denoted $\mathcal{L}$, is the probability density of the data given the parameters; most importantly, it is viewed as a function of the parameters:

\[ \mathcal{L}(\boldsymbol{\beta}, \theta) = p(\boldsymbol{y} | \boldsymbol{\beta}, \theta) = (2 \pi)^{-\frac{n}{2}} \, |V(\theta)|^{-\frac{1}{2}} \, \exp \left( - \frac{1}{2} (\boldsymbol{y} - X \boldsymbol{\beta})^T V(\theta)^{-1} (\boldsymbol{y} - X \boldsymbol{\beta}) \right) \]

where $|M|$ denotes the determinant of matrix $M$.

In practice the log-likelihood is maximized, as it is easier to compute and yields the same maximizer, the logarithm being monotonic: \[ l(\boldsymbol{\beta}, \theta) = \text{ln} \, \mathcal{L} = - \frac{1}{2} \left( \, n \, \text{ln}(2 \pi) \, + \, \text{ln} |V(\theta)| \, + \, (\boldsymbol{y} - X \boldsymbol{\beta})^T V(\theta)^{-1} (\boldsymbol{y} - X \boldsymbol{\beta}) \, \right) \]
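
For illustration, this log-likelihood can be evaluated directly in R. Below is a minimal sketch; the helper name `logLikLmm` and the use of a Cholesky factor are illustrative choices, not from the original text.

## Minimal sketch (illustrative): evaluate l(beta, theta) for given y, X, beta and
## marginal variance V, via a Cholesky factor of V.
logLikLmm <- function(y, X, beta, V){
  n <- length(y)
  cholV <- chol(V)                        # V = t(cholV) %*% cholV
  logDetV <- 2 * sum(log(diag(cholV)))    # ln |V|
  resid <- y - X %*% beta
  quadForm <- sum(backsolve(cholV, resid, transpose=TRUE)^2) # (y - X b)^T V^{-1} (y - X b)
  -0.5 * (n * log(2 * pi) + logDetV + quadForm)
}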

This document skips the estimation of $\theta$; considering $\theta$ as known, it focuses on the estimation of $\boldsymbol{\beta}$ and the prediction of $\boldsymbol{u}$.

Estimation of $\boldsymbol{\beta}$ and prediction of $\boldsymbol{u}$

Maximizing $\mathcal{L}(\boldsymbol{\beta})$

Given that the expectation of $\boldsymbol{y}$ is $X \boldsymbol{\beta}$ and that its variance $V(\theta)$ is considered known here, one can maximize the likelihood to estimate $\boldsymbol{\beta}$, which gives:

\[ \boldsymbol{\hat{\beta}} = (X^T V^{-1} X)^{-1} X^T V^{-1} \boldsymbol{y} \]

possibly using a generalized inverse if $X$ is not of full column rank, i.e., when $\text{rank}(X) < p$.

This is equivalent to generalized least squares (GLS).

Nevertheless, this formula requires the inverse of $V$, which is $n \times n$, hence the computational challenge for large $n$.
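
As a sketch of how this is typically computed (the helper name `estimBetaGls` is illustrative, not from the original text), one can avoid forming $V^{-1}$ explicitly and instead solve linear systems in $V$:

## Illustrative sketch of the GLS estimate of beta: solve linear systems in V
## rather than explicitly inverting it.
estimBetaGls <- function(y, X, V){
  VinvX <- solve(V, X)                              # V^{-1} X
  Vinvy <- solve(V, y)                              # V^{-1} y
  solve(crossprod(X, VinvX), crossprod(X, Vinvy))   # (X^T V^{-1} X)^{-1} X^T V^{-1} y
}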

Maximizing $f(\boldsymbol{y}, \boldsymbol{u} | \boldsymbol{\beta}, \theta)$

Assuming $\theta$ to be known, Henderson showed that the best linear unbiased estimates (BLUEs) of $\boldsymbol{\beta}$ and the best linear unbiased predictions (BLUPs) of $\boldsymbol{u}$ can be obtained by maximizing the joint density of $\boldsymbol{y}$ and $\boldsymbol{u}$ (it is not strictly a likelihood, as the random effects are not observed).

\begin{align}
\text{ln} \, f(\boldsymbol{y}, \boldsymbol{u} | \boldsymbol{\beta}, \theta) &= \text{ln} \, f(\boldsymbol{y} | \boldsymbol{u}, \boldsymbol{\beta}, \sigma^2, \phi) + \text{ln} \, f(\boldsymbol{u} | \sigma^2, \gamma) \\
&= - \frac{1}{2} \left[ \, n \, \text{ln}(2 \pi) + n \, \text{ln}(\sigma^2) + \text{ln} |R| + \frac{1}{\sigma^2} (\boldsymbol{y} - X \boldsymbol{\beta} - Z \boldsymbol{u})^T R^{-1} (\boldsymbol{y} - X \boldsymbol{\beta} - Z \boldsymbol{u}) \, \right] \\
& \; - \frac{1}{2} \left[ \, q \, \text{ln}(2 \pi) + q \, \text{ln}(\sigma^2) + \text{ln} |G| + \frac{1}{\sigma^2} \boldsymbol{u}^T G^{-1} \boldsymbol{u} \right]
\end{align}

knowing that $|c \, M| = c^n |M|$ if $M$ is $n \times n$.

It is useful to re-write this by separating the terms depending on $\boldsymbol{\beta}$ and those depending on $\boldsymbol{u}$:

\begin{align}
\text{ln} \, f(\boldsymbol{y}, \boldsymbol{u} \, | \, \boldsymbol{\beta}, \theta) &= - \frac{1}{2 \sigma^2} \left[ \, (\boldsymbol{y} - X \boldsymbol{\beta})^T R^{-1} (\boldsymbol{y} - X \boldsymbol{\beta}) \, \right] \\
&- \frac{1}{2 \sigma^2} \left[ \, \boldsymbol{u}^T (Z^T R^{-1} Z + G^{-1}) \boldsymbol{u} \, \right] \\
&- \frac{1}{2 \sigma^2} \left[ \, - 2 (\boldsymbol{y} - X \boldsymbol{\beta})^T R^{-1} Z \boldsymbol{u} \, \right] \\
&- \frac{1}{2} \left[ \, (n + q) \, \text{ln}(\sigma^2) + \text{ln} |R| + \text{ln} |G| + (n + q) \, \text{ln}(2 \pi) \right]
\end{align}

First derivatives

Then, one computes the first partial derivatives (called the score function when the function being differentiated is a log-likelihood, hence denoted $s$): \begin{align}
s(\boldsymbol{\beta}) &:= \frac{\partial \, \text{ln} f(\boldsymbol{y}, \boldsymbol{u} \, | \, \boldsymbol{\beta}, \theta)}{\partial \boldsymbol{\beta}}(\boldsymbol{\beta}) \\
&= \frac{\partial }{\partial \boldsymbol{\beta}} \left( -\frac{1}{2 \sigma^2} [(\boldsymbol{y} - X \boldsymbol{\beta})^T R^{-1} (\boldsymbol{y} - X \boldsymbol{\beta}) - 2 (\boldsymbol{y} - X \boldsymbol{\beta})^T R^{-1} Z \boldsymbol{u}] \right) \\
&= -\frac{1}{2 \sigma^2} [-2 X^T R^{-1} (\boldsymbol{y} - X \boldsymbol{\beta}) + 2 X^T R^{-1} Z \boldsymbol{u}] \\
&= -\frac{1}{\sigma^2} \left[ \, X^T R^{-1} X \boldsymbol{\beta} + X^T R^{-1} Z \boldsymbol{u} - X^T R^{-1} \boldsymbol{y} \, \right]
\end{align}

\begin{align}
s(\boldsymbol{u}) &:= \frac{\partial \, \text{ln} f(\boldsymbol{y}, \boldsymbol{u} \, | \, \boldsymbol{\beta}, \theta)}{\partial \boldsymbol{u}}(\boldsymbol{u}) \\
&= \frac{\partial }{\partial \boldsymbol{u}} \left( -\frac{1}{2 \sigma^2} [\boldsymbol{u}^T (Z^T R^{-1} Z + G^{-1}) \boldsymbol{u} - 2 (\boldsymbol{y} - X \boldsymbol{\beta})^T R^{-1} Z \boldsymbol{u}] \right) \\
&= -\frac{1}{2 \sigma^2} [ \, 2 (Z^T R^{-1} Z + G^{-1}) \boldsymbol{u} - 2 Z^T R^{-1} (\boldsymbol{y} - X \boldsymbol{\beta}) \, ] \\
&= -\frac{1}{\sigma^2} [ \, Z^T R^{-1} X \boldsymbol{\beta} + (Z^T R^{-1} Z + G^{-1}) \boldsymbol{u} - Z^T R^{-1} \boldsymbol{y} \, ]
\end{align}

MMEs

By equating these derivatives to zero, the system of equations known as the mixed model equations (MMEs) can be written in matrix form as:

\[ \begin{bmatrix} X^T R^{-1} X & X^T R^{-1} Z \\ Z^T R^{-1} X & Z^T R^{-1} Z + G^{-1} \end{bmatrix} \begin{bmatrix} \hat{\boldsymbol{\beta}} \\ \hat{\boldsymbol{u}} \end{bmatrix} = \begin{bmatrix} X^T R^{-1} \boldsymbol{y} \\ Z^T R^{-1} \boldsymbol{y} \end{bmatrix} \]

where the $(p+q) \times (p+q)$ matrix on the left-hand side is called the coefficient matrix and denoted $C$.

BLUEs and BLUPs

Developing the MMEs gives:

\begin{align}
X^T R^{-1} X \hat{\boldsymbol{\beta}} + X^T R^{-1} Z \hat{\boldsymbol{u}} &= X^T R^{-1} \boldsymbol{y} \\
Z^T R^{-1} X \hat{\boldsymbol{\beta}} + (Z^T R^{-1} Z + G^{-1}) \hat{\boldsymbol{u}} &= Z^T R^{-1} \boldsymbol{y}
\end{align}

Inserting into the first equation the expression of $\hat{\boldsymbol{u}}$ obtained from the second, we get the BLUE of $\boldsymbol{\beta}$, denoted $\hat{\boldsymbol{\beta}}$:

\begin{align}
X^T (R^{-1} - R^{-1} Z (Z^T R^{-1} Z + G^{-1})^{-1} Z^T R^{-1}) X \hat{\boldsymbol{\beta}} &= X^T (R^{-1} - R^{-1} Z (Z^T R^{-1} Z + G^{-1})^{-1} Z^T R^{-1}) \boldsymbol{y} \\
X^T (Z G Z^T + R)^{-1} X \hat{\boldsymbol{\beta}} &= X^T (Z G Z^T + R)^{-1} \boldsymbol{y} \\
X^T V^{-1} X \hat{\boldsymbol{\beta}} &= X^T V^{-1} \boldsymbol{y} \\
\hat{\boldsymbol{\beta}} &= (X^T V^{-1} X)^{-1} X^T V^{-1} \boldsymbol{y}
\end{align}

Similarly, we get the BLUP of $\boldsymbol{u}$, denoted $\hat{\boldsymbol{u}}$:

\begin{align}
\hat{\boldsymbol{u}} &= (Z^T R^{-1} Z + G^{-1})^{-1} Z^T R^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= G Z^T V^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}})
\end{align}

In practice, the unknown $\theta$ is replaced by its estimate $\hat{\theta}$: one hence speaks of "empirical" BLUEs and BLUPs, denoted eBLUEs and eBLUPs.

Simplification

Still, the formulas above require the inverse of $V$. However, writing $C_{22} := Z^T R^{-1} Z + G^{-1}$ for the lower-right block of the coefficient matrix $C$ (see below), it can be shown that:

\begin{align}
V \, (R^{-1} - R^{-1} Z C_{22}^{-1} Z^T R^{-1}) &= (R + Z G Z^T) \, (R^{-1} - R^{-1} Z C_{22}^{-1} Z^T R^{-1}) \\
&= \text{Id} + Z G Z^T R^{-1} - Z C_{22}^{-1} Z^T R^{-1} - Z G Z^T R^{-1} Z C_{22}^{-1} Z^T R^{-1} \\
&= \text{Id} + Z G Z^T R^{-1} - Z (\text{Id} + G Z^T R^{-1} Z) C_{22}^{-1} Z^T R^{-1} \\
&= \text{Id} + Z G Z^T R^{-1} - Z G (G^{-1} + Z^T R^{-1} Z) C_{22}^{-1} Z^T R^{-1} \\
&= \text{Id} + Z G Z^T R^{-1} - Z G C_{22} C_{22}^{-1} Z^T R^{-1} \\
&= \text{Id}
\end{align}

Hence:

\[ V^{-1} = R^{-1} - R^{-1} Z C_{22}^{-1} Z^T R^{-1} \]

Usually, $R^{-1}$ has a structure allowing it to be easily computed. Moreover, $C_{22}$ is $q \times q$ with $q \ll n$, making it much easier to invert than $V$, which is $n \times n$.
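
This identity is easy to verify numerically on small random matrices; the sketch below (with arbitrary dimensions and simulated matrices, not from the original example) compares the direct inverse of $V$ with the short formula:

## Numerical check of V^{-1} = R^{-1} - R^{-1} Z C_22^{-1} Z^T R^{-1} on random
## matrices (illustrative sketch, not part of the original document).
set.seed(1)
n <- 7; q <- 3
Z <- matrix(rnorm(n * q), n, q)
G <- crossprod(matrix(rnorm(q * q), q, q)) + diag(q)  # q x q, positive definite
R <- crossprod(matrix(rnorm(n * n), n, n)) + diag(n)  # n x n, positive definite
V <- Z %*% G %*% t(Z) + R
Rinv <- solve(R)
C22 <- t(Z) %*% Rinv %*% Z + solve(G)
Vinv.direct <- solve(V)
Vinv.short <- Rinv - Rinv %*% Z %*% solve(C22) %*% t(Z) %*% Rinv
max(abs(Vinv.direct - Vinv.short)) # should be ~0 (numerical precision)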

As a result, the formula for the BLUP can be simplified:

\begin{align}
\hat{\boldsymbol{u}} &= G Z^T V^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= G Z^T (R^{-1} - R^{-1} Z C_{22}^{-1} Z^T R^{-1}) (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= G (Z^T R^{-1} - Z^T R^{-1} Z C_{22}^{-1} Z^T R^{-1}) (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= G (\text{Id} - Z^T R^{-1} Z C_{22}^{-1}) Z^T R^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= G (C_{22} - Z^T R^{-1} Z) C_{22}^{-1} Z^T R^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= G (Z^T R^{-1} Z + G^{-1} - Z^T R^{-1} Z) C_{22}^{-1} Z^T R^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= (G Z^T R^{-1} Z + \text{Id} - G Z^T R^{-1} Z) C_{22}^{-1} Z^T R^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}}) \\
&= C_{22}^{-1} Z^T R^{-1} (\boldsymbol{y} - X \hat{\boldsymbol{\beta}})
\end{align}

TODO: also simplify the formula for the BLUE of $\boldsymbol{\beta}$

Inverse of $C$

The coefficient matrix $C$ of the MMEs is a block matrix, i.e., it is partitioned into four submatrices $C_{11}$, $C_{12}$, $C_{21}$ and $C_{22}$:

\[ C := \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} := \begin{bmatrix} X^T R^{-1} X & X^T R^{-1} Z \\ Z^T R^{-1} X & Z^T R^{-1} Z + G^{-1} \end{bmatrix} \]

As such, it can be inverted blockwise:

\[ C^{-1} := \begin{bmatrix} C^{11} & C^{12} \\ C^{21} & C^{22} \end{bmatrix} \]

Technically, as it is easier to invert lower or upper triangular matrices, an LDU decomposition is first applied to $C$. Such a decomposition involves the Schur complements of its diagonal blocks, which are then inverted:

\begin{align}
C^{11} &= (C / C_{22})^{-1} = (C_{11} - C_{12} C_{22}^{-1} C_{21})^{-1} \\
C^{22} &= (C / C_{11})^{-1} = (C_{22} - C_{21} C_{11}^{-1} C_{12})^{-1}
\end{align}

where $C / C_{ii}$ is the Schur complement of the block $C_{ii}$ of the matrix $C$.

Developing the first equation above gives:

\begin{align}
C^{11} &= [ C_{11} - C_{12} C_{22}^{-1} C_{21} ]^{-1} \\
&= [ X^T R^{-1} X - X^T R^{-1} Z (Z^T R^{-1} Z + G^{-1})^{-1} Z^T R^{-1} X ]^{-1} \\
&= [ X^T ( R^{-1} - R^{-1} Z (Z^T R^{-1} Z + G^{-1})^{-1} Z^T R^{-1} ) X ]^{-1} \\
&= (X^T V^{-1} X)^{-1}
\end{align}

Developing the second equation above gives:

\begin{align}
C^{22} &= [ C_{22} - C_{21} C_{11}^{-1} C_{12} ]^{-1} \\
&= [ Z^T R^{-1} Z + G^{-1} - Z^T R^{-1} X (X^T R^{-1} X)^{-1} X^T R^{-1} Z ]^{-1}
\end{align}
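
These blockwise formulas can be checked numerically against a direct inversion of $C$; the small simulated matrices below are illustrative only, not part of the original example:

## Check C^{11} and C^{22} from the Schur-complement formulas against solve(C)
## (illustrative sketch, not part of the original document).
set.seed(2)
n <- 8; p <- 2; q <- 4
X <- cbind(1, rnorm(n))
Z <- matrix(rbinom(n * q, 2, 0.5), n, q)
Ginv <- diag(q); Rinv <- diag(n)
C11 <- t(X) %*% Rinv %*% X
C12 <- t(X) %*% Rinv %*% Z
C21 <- t(C12)
C22 <- t(Z) %*% Rinv %*% Z + Ginv
C <- rbind(cbind(C11, C12), cbind(C21, C22))
Cinv <- solve(C)
C11.blk <- solve(C11 - C12 %*% solve(C22) %*% C21)  # = C^{11}
C22.blk <- solve(C22 - C21 %*% solve(C11) %*% C12)  # = C^{22}
max(abs(Cinv[1:p, 1:p] - C11.blk))                  # should be ~0
max(abs(Cinv[(p+1):(p+q), (p+1):(p+q)] - C22.blk))  # should be ~0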

Estimation and prediction variances

$Var(\hat{\boldsymbol{\beta}})$

When $A$ is a constant matrix and $x$ a random vector, it is known that $Var(A x) = A Var(x) A^T$ .

Hence:

\begin{align}
Var(\hat{\boldsymbol{\beta}}) &= Var((X^T V^{-1} X)^{-1} X^T V^{-1} \boldsymbol{y}) \\
&= (X^T V^{-1} X)^{-1} X^T V^{-1} \; V \; V^{-1} X (X^T V^{-1} X)^{-1} \\
&= (X^T V^{-1} X)^{-1} X^T V^{-1} X (X^T V^{-1} X)^{-1} \\
&= (X^T V^{-1} X)^{-1} \\
&= C^{11}
\end{align}

$Var(\hat{\boldsymbol{u}})$

TODO

\begin{align}
Var(\hat{\boldsymbol{u}}) &= ... \\
&= ... \\
&= G - C^{22}
\end{align}

Moreover:

\begin{align}
Cov(\hat{\boldsymbol{u}}, \boldsymbol{u}) &= ... \\
&= ... \\
&= Var(\hat{\boldsymbol{u}})
\end{align}

$Var(\hat{\boldsymbol{u}} - \boldsymbol{u})$

\begin{align}
Var(\hat{\boldsymbol{u}} - \boldsymbol{u}) &= Var(\hat{\boldsymbol{u}}) - 2 \, Cov(\hat{\boldsymbol{u}}, \boldsymbol{u}) + Var(\boldsymbol{u}) \\
&= G - C^{22} - 2 (G - C^{22}) + G \\
&= C^{22}
\end{align}
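
In practice, $Var(\hat{\boldsymbol{\beta}}) = C^{11}$ and the prediction error variances $\text{diag}(C^{22}) = \text{diag}(Var(\hat{\boldsymbol{u}} - \boldsymbol{u}))$ can be read off the inverse of the MME coefficient matrix. Below is a minimal sketch; the helper name `getMmeVariances` is illustrative, not from the original text, and with the $\lambda$-parametrization used in version 1 of the implementation below the result must additionally be multiplied by $\sigma^2$.

## Illustrative sketch: extract Var(beta_hat) = C^{11} and the prediction error
## variances diag(C^{22}) from the MME coefficient matrix `lhs` (p fixed effects).
## With the lambda-parametrization of version 1 below, multiply the output by sigma^2.
getMmeVariances <- function(lhs, p){
  q <- nrow(lhs) - p
  Cinv <- solve(lhs)
  list(var.beta.hat=Cinv[1:p, 1:p, drop=FALSE],
       pev=diag(Cinv[(p+1):(p+q), (p+1):(p+q), drop=FALSE]))
}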

Implementation

Version 1

Make the building blocks

Make the elements

makeMmeElements <- function(y, X, Z){
  if(is.vector(y))
    y <- matrix(y, nrow=length(y), ncol=1)
  stopifnot(is.matrix(y),
            is.matrix(X),
            is.matrix(Z),
            ncol(y) == 1,
            nrow(y) == nrow(X),
            nrow(X) == nrow(Z))

  ## for the left-hand side
  tX.X <- crossprod(X, X)
  tZ.X <- crossprod(Z, X)
  tX.Z <- crossprod(X, Z)
  tZ.Z <- crossprod(Z, Z)

  ## for the right-hand side
  tX.y <- crossprod(X, y)
  tZ.y <- crossprod(Z, y)

  return(list(tX.X=tX.X, tZ.X=tZ.X, tX.Z=tX.Z, tZ.Z=tZ.Z,
              tX.y=tX.y, tZ.y=tZ.y))
}

Make the right-hand side

makeMmeRhs <- function(tX.y, tZ.y){
  stopifnot(is.matrix(tX.y),
            ncol(tX.y) == 1,
            is.matrix(tZ.y),
            ncol(tZ.y) == 1)

  rhs <- c(tX.y, tZ.y)

  return(rhs)
}

Make the MME left-hand side

makeMmeLhs <- function(tX.X, tZ.X, tX.Z, tZ.Z, lambda, Ainv){
  stopifnot(is.matrix(tX.X),
            is.matrix(tZ.X),
            is.matrix(tX.Z),
            is.matrix(tZ.Z),
            nrow(tX.X) == nrow(tX.Z),
            nrow(tZ.Z) == nrow(tZ.X),
            is.numeric(lambda),
            length(lambda) == 1,
            is.matrix(Ainv))

  p <- nrow(tX.X)
  q <- nrow(tZ.Z)
  lhs <- matrix(data=NA, nrow=p+q, ncol=p+q)

  lhs[1:p, 1:p] <- tX.X
  lhs[(p+1):(p+q), 1:p] <- tZ.X
  lhs[1:p, (p+1):(p+q)] <- tX.Z
  lhs[(p+1):(p+q), (p+1):(p+q)] <- tZ.Z + lambda * Ainv

  return(lhs)
}

Solve

solveMme_v1 <- function(y, X, Z, sigma.u2, Ainv, sigma2){
  if(is.vector(y))
    y <- matrix(y, nrow=length(y), ncol=1)
  stopifnot(is.matrix(y),
            is.matrix(X),
            is.matrix(Z),
            is.matrix(Ainv),
            ncol(y) == 1,
            nrow(y) == nrow(X),
            nrow(X) == nrow(Z),
            nrow(Ainv) == ncol(Z))

  elems <- makeMmeElements(y=y, X=X, Z=Z)

  rhs <- makeMmeRhs(tX.y=elems$tX.y, tZ.y=elems$tZ.y)

  lhs <- makeMmeLhs(tX.X=elems$tX.X, tZ.X=elems$tZ.X,
                    tX.Z=elems$tX.Z, tZ.Z=elems$tZ.Z,
                    lambda=sigma2 / sigma.u2, Ainv=Ainv)

  theta.hat <- solve(lhs, rhs) # faster than solve(lhs) %*% rhs

  return(as.vector(theta.hat))
}
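
Here is a tiny self-contained usage sketch of `solveMme_v1`; the toy data below (two blocks, three genotypes, $A = \text{Id}$) are illustrative and not part of the original example:

## Illustrative toy usage of solveMme_v1 (2 blocks, 3 genotypes, A = Id):
set.seed(3)
q.toy <- 3
dat.toy <- data.frame(block=gl(2, q.toy, labels=c("A", "B")),
                      geno=factor(rep(paste0("g", 1:q.toy), 2)))
X.toy <- model.matrix(~ 1 + block, data=dat.toy)
Z.toy <- model.matrix(~ 0 + geno, data=dat.toy)
u.toy <- rnorm(q.toy, sd=2)
y.toy <- X.toy %*% c(10, -1) + Z.toy %*% u.toy + rnorm(2 * q.toy)
fit.toy <- solveMme_v1(y=y.toy, X=X.toy, Z=Z.toy, sigma.u2=4, Ainv=diag(q.toy), sigma2=1)
cbind(truth=c(10, -1, u.toy), estimate=fit.toy)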

Version 2

solveMme_v2 <- function(y, X, Z, sigma.u2, Ginv, Rinv){
  if(is.vector(y))
    y <- matrix(y, nrow=length(y), ncol=1)
  stopifnot(is.matrix(y),
            is.matrix(X),
            is.matrix(Z),
            is.matrix(Ginv),
            is.matrix(Rinv),
            ncol(y) == 1,
            nrow(y) == nrow(X),
            nrow(X) == nrow(Z),
            nrow(Ginv) == ncol(Ginv),
            nrow(Ginv) == ncol(Z),
            nrow(Rinv) == ncol(Rinv),
            nrow(Rinv) == nrow(y))

  tX <- t(X)
  tZ <- t(Z)

  ## W = C_22^{-1} = (Z^T R^{-1} Z + G^{-1})^{-1}: the only inversion needed is q x q
  W <- solve(tZ %*% Rinv %*% Z + Ginv)

  ## naive alternative (not run): invert the n x n matrix V directly;
  ## it would require G, R and sigma2, which are not arguments of this function
  if(FALSE){
    H <- Z %*% G %*% tZ + R
    V <- sigma2 * H
    Vinv <- solve(V)
  }
  ## instead, use the identity V^{-1} = R^{-1} - R^{-1} Z C_22^{-1} Z^T R^{-1}
  Vinv <- Rinv - Rinv %*% Z %*% W %*% tZ %*% Rinv
  BLUE_beta <- solve(tX %*% Vinv %*% X, tX %*% Vinv %*% y)

  BLUP_u <- W %*% tZ %*% Rinv %*% (y - X %*% BLUE_beta)

  out <- c(BLUE_beta[,1], BLUP_u[,1])
  return(out)
}
knitr::knit_exit()
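
Both versions should return the same BLUEs and BLUPs when they are given matching (co)variance structures, i.e., $G = \sigma_u^2 A$ and $R = \sigma^2 \text{Id}_n$. Below is a small consistency-check sketch with illustrative toy data (not part of the original example):

## Illustrative consistency check: solveMme_v1 and solveMme_v2 should agree
## when Ginv = (sigma.u2 * A)^{-1} and Rinv = (sigma2 * Id)^{-1}.
set.seed(4)
q.chk <- 4; nrep <- 3; n.chk <- q.chk * nrep
A.chk <- diag(q.chk)                                   # toy relationship matrix
X.chk <- cbind(1, rnorm(n.chk))
Z.chk <- do.call(rbind, rep(list(diag(q.chk)), nrep))  # each level observed nrep times
sigma.u2.chk <- 2; sigma2.chk <- 1
y.chk <- X.chk %*% c(5, 1) + Z.chk %*% rnorm(q.chk, sd=sqrt(sigma.u2.chk)) +
  rnorm(n.chk, sd=sqrt(sigma2.chk))
out1 <- solveMme_v1(y=y.chk, X=X.chk, Z=Z.chk, sigma.u2=sigma.u2.chk,
                    Ainv=solve(A.chk), sigma2=sigma2.chk)
out2 <- solveMme_v2(y=y.chk, X=X.chk, Z=Z.chk, sigma.u2=sigma.u2.chk,
                    Ginv=solve(sigma.u2.chk * A.chk), Rinv=diag(n.chk) / sigma2.chk)
max(abs(out1 - out2)) # should be ~0 (numerical precision)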

Example

suppressWarnings(suppressPackageStartupMessages(library(rutilstimflutre)))

Simulate data

set.seed(12345)

SNP genotypes

Coalescent

nbGenos <- 300
nbChroms <- 10
Ne <- 10^4
chromLen <- 10^5
mutRate <- 10^(-8)
recRate <- 10^(-8)
nbPops <- 10
migRate <- 50 # between 0.5 and 100
genomes <- simulCoalescent(nb.inds=nbGenos,
                           nb.reps=nbChroms,
                           pop.mut.rate=4 * Ne * mutRate * chromLen,
                           pop.recomb.rate=4 * Ne * recRate * chromLen,
                           chrom.len=chromLen,
                           nb.pops=nbPops,
                           mig.rate=migRate)
M <- genomes$genos
dim(M)
M[1:3, 1:5]
nbSnps <- ncol(M)
levGenos <- rownames(M)

Allele frequencies

afs <- estimSnpAf(M)
plotHistAllelFreq(afs=afs)
mafs <- estimSnpMaf(M)
plotHistMinAllelFreq(mafs=mafs)

Genetic relationships

A_vr <- estimGenRel(M, afs, relationships="additive", method="vanraden1")
hist(diag(A_vr), # 1 expected under HWE
     breaks="FD", main="Inbreeding coefficients")
hist(A_vr[upper.tri(A_vr)], # 0 expected under HWE
     main="Additive genetic relationships")
imageWithScale(A_vr,
               main="Additive genetic relationships")

Linkage disequilibrium

TODO

Phenotypes

Each genotype is phenotyped a given number of times.

nbBlocks <- 4
levBlocks <- LETTERS[1:nbBlocks]
nbDat <- nbGenos * nbBlocks
dat <- data.frame(block=rep(levBlocks, each=nbGenos),
                  geno=levGenos,
                  stringsAsFactors=TRUE)
str(dat)
head(dat)

Design effects

X <- model.matrix(~ 1 + block, data=dat)
mu <- 50
effBlocks <- rnorm(n=nbBlocks - 1, mean=0, sd=10)
beta <- c(mu, effBlocks)
nbFixEffs <- length(beta)

SNP substitution effects

sigma_alpha2 <- 10^(-3)
alpha <- rnorm(n=ncol(M), mean=0, sd=sqrt(sigma_alpha2))
names(alpha) <- colnames(M)

Genotypic values

tmp <- matrix(rep(1, nbGenos)) %*% (2 * afs) # each row equals twice the allele frequencies
Z_u <- M - tmp                               # center the SNP genotypes
u <- Z_u %*% alpha                           # additive genotypic values
Z <- model.matrix(~ 0 + geno, data=dat)
## image(t(Z)[, nrow(Z):1])
nbRndEffs <- length(u)
(sigma_u2 <- sigma_alpha2 * 2 * sum(afs * (1 - afs)))
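
The line above uses the classical expression of the additive genetic variance for independent SNPs with i.i.d. substitution effects:

\[ \sigma_u^2 = \sigma_\alpha^2 \sum_{j=1}^{m} 2 \, p_j (1 - p_j) \]

where $p_j$ is the allele frequency of SNP $j$ and $m$ is the number of SNPs (here `nbSnps`).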

Errors

h2 <- 0.7
(sigma2 <- ((1 - h2) / h2) * sigma_u2)
epsilon <- rnorm(n=nbDat, mean=0, sd=sqrt(sigma2))

Phenotypic values

y <- X %*% beta + Z %*% u + epsilon

Compute BLUEs and BLUPs

(n <- nrow(y))
(p <- ncol(X))
(q <- ncol(Z))

Version 1

system.time(
    out <- solveMme_v1(y=y, X=X, Z=Z, sigma.u2=sigma_u2, Ainv=solve(A_vr), sigma2=sigma2))
(checkFixEffs <- cbind(truth=beta,
                       BLUE=out[1:p]))
checkRndEffs <- cbind(truth=u[,1],
                      BLUP=out[(p+1):(p+q)])
cor(checkRndEffs[,"truth"], checkRndEffs[,"BLUP"])

Version 2

system.time(
    out <- solveMme_v2(y=y, X=X, Z=Z, sigma.u2=sigma_u2,
                       Ginv=solve(sigma_u2 * A_vr), Rinv=diag(nrow(y)) / sigma2))
(checkFixEffs <- cbind(truth=beta,
                       BLUE=out[1:p]))
checkRndEffs <- cbind(truth=u[,1],
                      BLUP=out[(p+1):(p+q)])
cor(checkRndEffs[,"truth"], checkRndEffs[,"BLUP"])



