Preamble

License: CC BY-SA 4.0

Motivation

A large part of any scientific activity is about measuring things, in other words collecting data, and it is not infrequent to collect heterogeneous data. It then seems natural to say that the samples come from a mixture of clusters. The aim is thus to recover from the data, i.e. to infer, (i) how many clusters there are, (ii) what the features of these clusters are, and (iii) which cluster each statistical unit comes from. In this document, we focus on points (ii) and (iii).

(Caution, this is my own quick-and-dirty tutorial, see the references at the end for presentations by professional statisticians.)

Data

The data, generically denoted $\mathcal{D}$, consist here of $N$ observations, gathered into the vector $\boldsymbol{x}$:

$\mathcal{D} = \{x_1, x_2, \ldots, x_N\}$

In this document, we suppose that each observation $x_i$ is univariate, i.e. a scalar.

Assumption

Let us assume that the data are heterogeneous and that they can be partitioned into $K$ clusters (in this document, we suppose that $K$ is known). This means that we expect a subset of the observations to come from cluster $k = 1$, another subset from cluster $k = 2$, and so on.

Statistical model

Writing down the statistical model usually means starting by writing down the likelihood of the parameters. Technically, we say that the observations were generated according to a density function $f$. In this document, we will assume that this density is itself a mixture of densities, one per cluster. Furthermore, observations from cluster $k$ are generated from a Normal distribution, $\mathcal{N}$, whose density is noted here $\phi$, with mean $\mu_k$ and standard deviation $\sigma_k$. Moreover, as we don't know for sure from which cluster a given observation comes, we define the mixture weight $w_k$ (also called mixing proportion) to be the probability that any given observation comes from cluster $k$.

As a result, we have the following list of parameters: $\Theta = \{w_1,\ldots,w_K,\mu_1,\ldots,\mu_K,\sigma_1,\ldots,\sigma_K\}$.

Finally, we can write the likelihood, assuming that the observations are conditionally independent (conditionally on the parameters):

\begin{align}
\mathcal{L}(\Theta; \mathcal{D}) &= f(\mathcal{D} | \Theta) \\
&= \prod_{i=1}^N f(x_i | \Theta) \\
&= \prod_{i=1}^N \sum_{k=1}^{K} w_k \, \phi(x_i|\mu_k,\sigma_k) \\
&= \prod_{i=1}^N \sum_{k=1}^{K} w_k \frac{1}{\sigma_k \sqrt{2\pi}} \exp \left( -\frac{1}{2} \left( \frac{x_i - \mu_k}{\sigma_k} \right)^2 \right)
\end{align}

The constraints are: $\forall k, w_k \ge 0$ and $\sum_{k=1}^K w_k = 1$.
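
To make this concrete, here is a minimal R sketch evaluating such a mixture density at a few points; the parameter values below (for $K=2$) are made up purely for illustration and are not part of the derivation:

ws <- c(0.3, 0.7)        # mixture weights (sum to 1)
mus <- c(0, 4)           # cluster means
sigmas <- c(1, 1.5)      # cluster standard deviations
x <- c(-1, 0.5, 3, 5)    # a few observations
## f(x_i) = sum_k w_k * phi(x_i | mu_k, sigma_k)
f <- rowSums(sapply(1:2, function(k){ ws[k] * dnorm(x, mus[k], sigmas[k]) }))
f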

Maximum likelihood

Naturally, we can start by maximizing the likelihood in order to estimate the parameters:

$L(\theta) = P(X|\theta) = \prod_{i=1}^N f(x_i|\theta) = \prod_{i=1}^N \sum_{k=1}^K w_k \phi(x_i;\theta_k)$

Note that expanding this product of sums into a sum of products yields $K^N$ terms, which quickly becomes intractable.

As usual, it's easier to deal with the log-likelihood:

$l(\theta) = \sum_{i=1}^N ln \left( \sum_{k=1}^K w_k \phi(x_i; \theta_k) \right)$

Let's take the derivative with respect to one parameter, e.g. $\theta_l$:

$\frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{1}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} w_l \frac{\partial \phi(x_i; \theta_l)}{\partial \theta_l}$

$\frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{w_l \phi(x_i; \theta_l)}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} \frac{1}{\phi(x_i; \theta_l)} \frac{\partial \phi(x_i; \theta_l)}{\partial \theta_l}$

$\frac{\partial l}{\partial \theta_l} = \sum_{i=1}^N \frac{w_l \phi(x_i; \theta_l)}{\sum_{k=1}^K w_k \phi(x_i; \theta_k)} \frac{\partial ln ( \phi(x_i; \theta_l) )}{\partial \theta_l}$

This shows that maximizing the likelihood of a mixture model is like doing a weighted likelihood maximization. However, these weights depend on the parameters we want to estimate! That's why we now switch to the missing-data formulation of the mixture model.

Missing data

We introduce the following N latent variables $Z_1,...,Z_i,...,Z_N$ (also called hidden or allocation variables), one for each observation, such that $Z_i=k$ means that observation $x_i$ belongs to cluster $k$. Thanks to this, we can reinterpret the mixture weights: $\forall i, P(Z_i=k|\theta)=w_k$. In fact, it is much easier to do the maths when defining each $Z_i$ as a vector of length $K$, with $Z_{ik}=1$ if observation $x_i$ belongs to cluster $k$, and $Z_{ik}=0$ otherwise (indicator variables). Moreover, we can now define the membership probabilities, one for each observation:

$p(k|i) = P(Z_{ik}=1|x_i,\theta) = \frac{P(Z_{ik}=1 | \theta) p(x_i | Z_{ik}=1,\theta)}{p(x_i | \theta)} = \frac{w_k \phi(x_i|\mu_k,\sigma_k)}{\sum_{l=1}^K w_l \phi(x_i|\mu_l,\sigma_l)}$
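
As a small illustration (again with made-up parameters for $K=2$, as in the sketch above), the membership probabilities of a single observation follow directly from Bayes' rule:

ws <- c(0.3, 0.7); mus <- c(0, 4); sigmas <- c(1, 1.5)   # hypothetical parameters
x.i <- 3                                                  # a single observation
num <- ws * dnorm(x.i, mus, sigmas)                       # w_k * phi(x_i | mu_k, sigma_k)
p.ki <- num / sum(num)                                    # normalize over the K clusters
p.ki                                                      # sums to 1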

The observed-data likelihood (also sometimes called "incomplete" or "marginal", even though these names are misnomers) is still written the same way:

$L_{obs}(\theta) = P(X|\theta) = \prod_{i=1}^N f(x_i|\theta)$

But now we can also write the augmented-data likelihood (also sometimes called "complete"), assuming all observations are independent conditionally on their membership:

$L_{aug}(\theta) = P(X,Z|\theta) = \prod_{i=1}^N P(x_i|Z_i,\theta) P(Z_i|\theta) = \prod_{i=1}^N \left( \prod_{k=1}^K \phi(x_i|\mu_k,\sigma_k)^{Z_{ik}} w_k^{Z_{ik}} \right)$.

Note how easy it is to write, thanks to the fact that we chose to use indicator variables $Z_{ik} \in \{0,1\}$ rather than $Z_i=k$.

And here is the augmented-data log-likelihood (useful in the M step of the EM algorithm, see below):

$l_{aug}(\theta) = \sum_{i=1}^N \left( \sum_{k=1}^K Z_{ik} ln(\phi(x_i|\mu_k,\sigma_k)) + \sum_{k=1}^K Z_{ik} ln(w_k) \right)$
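
As a quick sketch, the augmented-data log-likelihood is easy to compute once a (hard) assignment matrix $Z$ is given; the observations, parameters, and assignments below are made up for illustration:

ws <- c(0.3, 0.7); mus <- c(0, 4); sigmas <- c(1, 1.5)    # hypothetical parameters
x <- c(-1, 0.5, 3, 5)                                      # hypothetical observations
Z <- matrix(c(1,0, 1,0, 0,1, 0,1), ncol=2, byrow=TRUE)     # Z[i,k] = 1 if x_i is in cluster k
l.aug <- sum(sapply(seq_along(x), function(i){
  sum(Z[i,] * (dnorm(x[i], mus, sigmas, log=TRUE) + log(ws)))
}))
l.aug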

Using plate notation, the Gaussian mixture model described here can also be represented graphically.

EM algorithm

Definition

We first define an objective function, $Q$, which happens to be the conditional expectation of the augmented-data log-likelihood function, $l_{aug}$, over the latent variables, $Z$, given the observed data, $X$, and the parameter estimates, $\theta$. The idea is to iterate two steps, starting from randomly-initialized parameters. In the E step, one takes an expectation, that is one computes this objective function, which amounts to determining the membership probabilities. In the M step, one maximizes this objective function to determine the next iterate of the parameter estimates. In equations, it can be written like this:

E step: $Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ ln(P(X,Z|\theta)) | X,\theta^{(t)} \right] = \int q(Z|X,\theta^{(t)}) \; l_{aug} \; dZ$

M step: $\theta^{(t+1)} = argmax_{\theta} \; Q(\theta|X,\theta^{(t)})$ so that $\forall \theta \in \Theta, \; Q(\theta^{(t+1)}|X,\theta^{(t)}) \ge Q(\theta|X,\theta^{(t)})$

Theory

Stated like this, it is not necessarily easy to understand right away, at least it wasn't for me. Fortunately, Matthew Beal presents it in a clear and simple way in his PhD thesis (see the references at the bottom of the page).

Here is the observed-data log-likelihood:

$l_{obs}(\theta) = \sum_{i=1}^N ln \left( f(x_i|\theta) \right)$

First, we introduce the hidden variables by writing each $f(x_i|\theta)$ as an integral over them:

$l_{obs}(\theta) = \sum_{i=1}^N ln \left( \int p(x_i,z_i|\theta) dz_i \right)$

Then, we use any probability distribution $q$ on these hidden variables (in fact, we use a distinct distribution $q_{z_i}$ for each observation):

$l_{obs}(\theta) = \sum_{i=1}^N ln \left( \int q_{z_i}(z_i) \frac{p(x_i,z_i|\theta)}{q_{z_i}(z_i)} dz_i \right)$

And here is the great trick, as explained by Beal: "any probability distribution over the hidden variables gives rise to a lower bound on $l_{obs}$". This is due to Jensen's inequality (the logarithm is concave):

$l_{obs}(\theta) \ge \sum_{i=1}^N \int q_{z_i}(z_i) ln \left( \frac{p(x_i,z_i|\theta)}{q_{z_i}(z_i)} \right) dz_i = \mathcal{F}(q_{z_1}(z_1), ..., q_{z_N}(z_N), \theta)$

At each iteration, the E step maximizes the lower bound $\mathcal{F}$ with respect to the $q_{z_i}(z_i)$, and the M step maximizes it with respect to $\theta$:

E step: $q^{(t+1)}_{z_i} \leftarrow argmax_{q_{z_i}} \; \mathcal{F}(q_z(z), \theta^{(t)}) \; \forall i$

M step: $\theta^{(t+1)} \leftarrow argmax_\theta \; \mathcal{F}(q^{(t+1)}_z(z), \theta)$

The E-step amounts to inferring the posterior distribution of the hidden variables $q^{(t+1)}_{z_i}$ given the current parameter $\theta^{(t)}$:

$q^{(t+1)}_{z_i}(z_i) = p(z_i | x_i, \theta^{(t)})$

Indeed, the $q^{(t+1)}_{z_i}(z_i)$ make the bound tight (the inequality becomes an equality):

$\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int q^{(t+1)}_{z_i}(z_i) ln \left( \frac{p(x_i,z_i|\theta^{(t)})}{q^{(t+1)}_{z_i}(z_i)} \right) dz_i$

$\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( \frac{p(x_i,z_i|\theta^{(t)})}{p(z_i | x_i, \theta^{(t)})} \right) dz_i$

$\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( \frac{p(x_i|\theta^{(t)}) \, p(z_i|x_i,\theta^{(t)})}{p(z_i | x_i, \theta^{(t)})} \right) dz_i$

$\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N \int p(z_i | x_i, \theta^{(t)}) ln \left( p(x_i|\theta^{(t)}) \right) dz_i$

$\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = \sum_{i=1}^N ln \left( p(x_i|\theta^{(t)}) \right)$

$\mathcal{F}(q^{(t+1)}_z(z), \theta^{(t)}) = l_{obs}(\theta^{(t)})$

Then, at the M step, we use these statistics to maximize the new lower bound $\mathcal{F}$ with respect to $\theta$, and therefore find $\theta^{(t+1)}$.
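
As a quick numerical sanity check of this tightness (with made-up parameters and observations for $K=2$, not part of the derivation), plugging the exact posteriors into $\mathcal{F}$ indeed recovers $l_{obs}$:

## check that F(q*, theta) equals l_obs(theta) when q* is the exact posterior p(z_i|x_i,theta)
ws <- c(0.3, 0.7); mus <- c(0, 4); sigmas <- c(1, 1.5)   # hypothetical parameters
x <- c(-1, 0.5, 3, 5)                                     # hypothetical observations
l.obs <- sum(log(sapply(x, function(x.i){ sum(ws * dnorm(x.i, mus, sigmas)) })))
F.lb <- sum(sapply(x, function(x.i){
  joint <- ws * dnorm(x.i, mus, sigmas)    # p(x_i, z_i=k | theta)
  q <- joint / sum(joint)                  # exact posterior p(z_i=k | x_i, theta)
  sum(q * log(joint / q))                  # contribution of x_i to the lower bound
}))
all.equal(l.obs, F.lb)                     # TRUE: the bound is tight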

Variational approximation

If the posterior distributions $p(z_i|x_i,\theta)$ are intractable, we can use a variational approach to constrain them to be of a particular, tractable form. In the E step, maximizing $\mathcal{F}$ with respect to $q_{z_i}$ is equivalent to minimizing the Kullback-Leibler divergence between the variational distribution $q(z_i)$ and the exact hidden variable posterior $p(z_i|x_i,\theta)$:

$KL[q_{z_i}(z_i) || p(z_i|x_i,\theta)] = \int q_{z_i}(z_i) ln \left( \frac{q_{z_i}(z_i)}{p(z_i|x_i,\theta)} \right) dz_i$

As a result, the E step may not always lead to a tight bound.
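
For discrete latent variables such as the $Z_i$ here, this divergence is just a finite sum over the $K$ clusters; a minimal sketch (the function name below is ours, for illustration only):

## Kullback-Leibler divergence between two discrete distributions over K clusters
calcKL <- function(q, p){   # q and p: vectors of probabilities summing to 1
  sum(q * log(q / p))
}
calcKL(c(0.5, 0.5), c(0.9, 0.1))   # strictly positive when q differs from p
calcKL(c(0.9, 0.1), c(0.9, 0.1))   # equals 0 when q matches the posterior exactly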

Formulas for both steps

In both steps we need to use $Q$, whether to evaluate it or maximize it.

$Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ ln(P(X,Z|\theta))|X,\theta^{(t)} \right]$

$Q(\theta|X,\theta^{(t)}) = \mathbb{E}_{Z|X,\theta^{(t)}} \left[ l_{aug}(\theta)|X,\theta^{(t)} \right]$

$Q(\theta|X,\theta^{(t)}) = \sum_{i=1}^N \left( \sum_{k=1}^K \mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] \; ln(\phi(x_i|\mu_k,\sigma_k)) + \sum_{k=1}^K \mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] \; ln(w_k) \right)$

Formulas of the E step

As indicated above, the E step consists of evaluating $Q$, i.e. computing the conditional expectation, over the latent variables, of the augmented-data log-likelihood, given the observed data and the current parameter estimates.

$\mathbb{E}_{Z|X,\theta^{(t)}}[Z_{ik}|x_i,\theta_k^{(t)}] = 1 \times P(Z_{ik}=1|x_i,\theta_k^{(t)}) + 0 \times P(Z_{ik}=0|x_i,\theta_k^{(t)}) = P(Z_{ik}=1|x_i,\theta_k^{(t)}) = \frac{w_k^{(t)} \phi(x_i|\mu_k^{(t)},\sigma_k^{(t)})}{\sum_{l=1}^K w_l^{(t)} \phi(x_i|\mu_l^{(t)},\sigma_l^{(t)})} = p(k|i)$

Note how the conditional expectation over $Z_{ik}$ simply happens to be the posterior of $Z_{ik}=1$, which of course corresponds to the membership probability.

Formulas of the M step

In this step, we need to maximize $Q$ (also written $\mathcal{F}$ above) w.r.t. each $\theta_k$. Only a few basic calculus rules are required to write down the analytical formulas of the MLEs. For the mixture weights, the constraint $\sum_{k=1}^K w_k = 1$ is handled with a Lagrange multiplier $\lambda$, which gives the following Lagrangian:

$\Lambda(w_k,\lambda) = \sum_{i=1}^N \left( \sum_{k=1}^K p(k|i) ln(w_k) \right) + \lambda (1 - \sum_{k=1}^K w_k)$

As usual, to find the maximum, we take the derivative and set it equal to zero:

$\frac{\partial \Lambda}{\partial w_k}(w_k) = \sum_{i=1}^N \left( p(k|i) \frac{1}{w_k} \right) - \lambda$

$\frac{\partial \Lambda}{\partial w_k}(\hat{w}_k^{(t+1)}) = 0$

$\hat{w}_k^{(t+1)} = \frac{1}{\lambda} \sum_{i=1}^N p(k|i)$

Now, to find the multiplier, we go back to the constraint:

$\sum_{k=1}^K \hat{w}_k^{(t+1)} = 1 \rightarrow \lambda = \sum_{i=1}^N \sum_{k=1}^K p(k|i) = N$

Finally:

$\hat{w}_k^{(t+1)} = \frac{1}{N} \sum_{i=1}^N p(k|i)$

Now, for the means:

$\frac{\partial Q}{\partial \mu_k} = \sum_{i=1}^N p(k|i) \frac{\partial ln(\phi(x_i|\mu_k,\sigma_k))}{\partial \mu_k}$

$\frac{\partial Q}{\partial \mu_k} = \sum_{i=1}^N p(k|i) \frac{1}{\phi(x_i|\mu_k,\sigma_k)} \frac{\partial \phi(x_i|\mu_k,\sigma_k)}{\partial \mu_k}$

$\frac{\partial Q}{\partial \mu_k} = 0 \; \Leftrightarrow \; \sum_{i=1}^N p(k|i) (x_i - \hat{\mu}_k^{(t+1)}) = 0$

Finally:

$\hat{\mu}_k^{(t+1)} = \frac{\sum_{i=1}^N p(k|i) \, x_i}{\sum_{i=1}^N p(k|i)}$

And for the standard deviations:

$\frac{\partial Q}{\partial \sigma_k} = \sum_{i=1}^N p(k|i) \left( \frac{-1}{\sigma_k} + \frac{(x_i - \mu_k)^2}{\sigma_k^3} \right)$

$\hat{\sigma}_k^{(t+1)} = \sqrt{\frac{\sum_{i=1}^N p(k|i) (x_i - \hat{\mu}_k^{(t+1)})^2}{\sum_{i=1}^N p(k|i)}}$

Note that, instead of using a Lagrange multiplier, we can also enforce the constraint on the weights by reparameterizing them with a softmax:

$w_k = \frac{e^{\gamma_k}}{\sum_{l=1}^K e^{\gamma_l}}$

$\frac{\partial w_k}{\partial \gamma_j} = \begin{cases} w_k - w_k^2 & \mbox{if } j = k \\ -w_k w_j & \mbox{otherwise} \end{cases}$

By the chain rule, the derivative of the log-likelihood with respect to $\gamma_k$ is:

$\frac{\partial l(\theta)}{\partial \gamma_k} = \sum_{i=1}^N (p(k|i) - w_k)$

Finally:

$\hat{w}_k = \frac{1}{N} \sum_{i=1}^N p(k|i)$
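
The gradient formula above can be checked numerically with a simple finite-difference approximation; the sketch below uses made-up parameters and observations ($K=2$) and base R only:

## numerical check of dl/dgamma_k = sum_i (p(k|i) - w_k)
mus <- c(0, 4); sigmas <- c(1, 1.5)           # hypothetical parameters, kept fixed
x <- c(-1, 0.5, 3, 5)                         # hypothetical observations
loglik.gammas <- function(gammas){            # observed-data log-likelihood as a function of the gammas
  ws <- exp(gammas) / sum(exp(gammas))
  sum(log(sapply(x, function(x.i){ sum(ws * dnorm(x.i, mus, sigmas)) })))
}
gammas <- c(0.2, -0.3)
ws <- exp(gammas) / sum(exp(gammas))
p.ki <- t(sapply(x, function(x.i){ tmp <- ws * dnorm(x.i, mus, sigmas); tmp / sum(tmp) }))
grad.analytic <- colSums(p.ki) - length(x) * ws      # sum_i (p(k|i) - w_k)
grad.numeric <- sapply(1:2, function(k){
  eps <- 1e-6
  gammas.eps <- gammas; gammas.eps[k] <- gammas.eps[k] + eps
  (loglik.gammas(gammas.eps) - loglik.gammas(gammas)) / eps
})
all.equal(grad.analytic, grad.numeric, tolerance=1e-4)   # should be TRUE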

Simulation

simulUnivData <- function(K=2, N=100, gap=6){
  mus <- seq(0, gap*(K-1), gap)           # cluster means, separated by 'gap'
  sigmas <- runif(n=K, min=0.5, max=1.5)  # cluster standard deviations
  tmp <- floor(rnorm(n=K-1, mean=floor(N/K), sd=5))  # sizes of the first K-1 clusters
  ns <- c(tmp, N - sum(tmp))              # last cluster size, so that the sizes sum to N
  clusters <- as.factor(matrix(unlist(lapply(1:K, function(k){rep(k, ns[k])})),
                               ncol=1))
  obs <- matrix(unlist(lapply(1:K, function(k){
    rnorm(n=ns[k], mean=mus[k], sd=sigmas[k])
  })))
  new.order <- sample(1:N, N)             # shuffle the observations
  obs <- obs[new.order]
  rownames(obs) <- NULL
  clusters <- clusters[new.order]
  return(list(obs=obs, clusters=clusters, mus=mus, sigmas=sigmas,
              mix.weights=ns/N))
}
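
For instance, one can quickly inspect a simulated data set (the seed value below is arbitrary, only there for reproducibility):

set.seed(1859)              # arbitrary seed, only for reproducibility
simul.test <- simulUnivData(K=2, N=50)
str(simul.test$obs)
table(simul.test$clusters)  # cluster sizes
simul.test$mix.weights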

Inference

E step

Estep <- function(data, params){
  calcMembershipProbas(data, params$mus, params$sigmas, params$mix.weights)
}

calcMembershipProbas <- function(data, mus, sigmas, mix.weights){
  N <- length(data)
  K <- length(mus)
  ## for each observation, compute the first K-1 membership probabilities
  ## p(k|i) = w_k phi(x_i|mu_k,sigma_k) / sum_l w_l phi(x_i|mu_l,sigma_l)
  tmp <- matrix(unlist(lapply(1:N, function(i){
    x <- data[i]
    norm.const <- sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * calcUnivNormDens(x, mu, sigma)}, mus, sigmas, mix.weights)))
    unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * calcUnivNormDens(x, mu, sigma) / norm.const
    }, mus[-K], sigmas[-K], mix.weights[-K]))
  })), ncol=K-1, byrow=TRUE)
  ## the K-th column is obtained by difference, so that each row sums to 1
  membership.probas <- cbind(tmp, apply(tmp, 1, function(x){1 - sum(x)}))
  names(membership.probas) <- NULL
  return(membership.probas)
}

calcUnivNormDens <- function(x, mu, sigma){
  return( 1/(sigma * sqrt(2*pi)) * exp(-1/(2*sigma^2)*(x-mu)^2) )
}
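
As a quick sanity check, this hand-written density can be compared with R's built-in dnorm:

all.equal(calcUnivNormDens(x=1.2, mu=0.5, sigma=2),
          dnorm(x=1.2, mean=0.5, sd=2))   # should be TRUE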

M step

Mstep <- function(data, params, membership.probas){
  params.new <- list()
  sum.membership.probas <- apply(membership.probas, 2, sum)
  params.new$mus <- calcMlEstimMeans(data, membership.probas,
                                     sum.membership.probas)
  params.new$sigmas <- calcMlEstimStdDevs(data, params.new$mus,
                                          membership.probas,
                                          sum.membership.probas)
  params.new$mix.weights <- calcMlEstimMixWeights(data, membership.probas,
                                                  sum.membership.probas)
  return(params.new)
}

calcMlEstimMeans <- function(data, membership.probas, sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sum(unlist(Map("*", membership.probas[,k], data))) /
      sum.membership.probas[k]
  })
}

calcMlEstimStdDevs <- function(data, means, membership.probas,
                              sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    sqrt(sum(unlist(Map(function(p.ki, x.i){
      p.ki * (x.i - means[k])^2
    }, membership.probas[,k], data))) /
    sum.membership.probas[k])
  })
}

calcMlEstimMixWeights <- function(data, membership.probas,
                                  sum.membership.probas){
  K <- ncol(membership.probas)
  sapply(1:K, function(k){
    1/length(data) * sum.membership.probas[k]
  })
}

Log-likelihood

logLikelihood <- function(data, mus, sigmas, mix.weights){
  loglik <- sum(sapply(data, function(x){
    log(sum(unlist(Map(function(mu, sigma, mix.weight){
      mix.weight * calcUnivNormDens(x, mu, sigma)
    }, mus, sigmas, mix.weights))))
  }))
  return(loglik)
}

EM algorithm

EMalgo <- function(data, params, threshold.convergence=10^(-2), nb.iter=10,
                   verbose=1){
  logliks <- vector()
  i <- 1
  if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
  membership.probas <- Estep(data, params)
  params <- Mstep(data, params, membership.probas)
  loglik <- logLikelihood(data, params$mus, params$sigmas,
                          params$mix.weights)
  logliks <- append(logliks, loglik)
  while(i < nb.iter){
    i <- i + 1
    if(verbose > 0) cat(paste("iter ", i, "\n", sep=""))
    membership.probas <- Estep(data, params)
    params <- Mstep(data, params, membership.probas)
    loglik <- logLikelihood(data, params$mus, params$sigmas,
                            params$mix.weights)
    if(loglik < logliks[length(logliks)]){
      msg <- paste("the log-likelihood is decreasing:",
                   loglik, "<", logliks[length(logliks)])
      stop(msg, call.=FALSE)
    }
    logliks <- append(logliks, loglik)
    if(abs(logliks[i] - logliks[i-1]) <= threshold.convergence)
      break
  }
  return(list(params=params, membership.probas=membership.probas,
              logliks=logliks, nb.iters=i))
}

Evaluation

Simulate some data:

K <- 3
N <- 300
simul <- simulUnivData(K, N)
data <- simul$obs

Run the EM algorithm:

params0 <- list(mus=runif(n=K, min=min(data), max=max(data)),
                sigmas=rep(1, K),
                mix.weights=rep(1/K, K))
res <- EMalgo(data, params0, 10^(-3), 1000, 1)

Check its convergence:

plot(res$logliks, xlab="iterations", ylab="log-likelihood",
     main="Convergence of the EM algorithm", type="b")

Plot the data along with the inferred densities:

hist(data, breaks=30, freq=FALSE, col="grey", border="white",
     ylim=c(0,0.15), las=1,
     main="Histogram of data overlaid with densities inferred by EM")
rx <- seq(from=min(data), to=max(data), by=0.1)
ds <- lapply(1:K, function(k){
  dnorm(x=rx, mean=res$params$mus[k], sd=res$params$sigmas[k])
})
f <- sapply(1:length(rx), function(i){
  res$params$mix.weights[1] * ds[[1]][i] +
    res$params$mix.weights[2] * ds[[2]][i] +
    res$params$mix.weights[3] * ds[[3]][i]
})
lines(rx, f, col="red", lwd=2)
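
Since the data were simulated, we can also compare the inferred parameters with the true ones (the cluster labels are only identifiable up to a permutation, hence the reordering by increasing mean):

ord <- order(res$params$mus)      # reorder the inferred clusters by increasing mean
rbind(true=simul$mus, estim=res$params$mus[ord])
rbind(true=simul$sigmas, estim=res$params$sigmas[ord])
rbind(true=simul$mix.weights, estim=res$params$mix.weights[ord])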

Beyond

References

Beal, M.J. (2003). Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Computational Neuroscience Unit, University College London.

Appendix

print(sessionInfo(), locale=FALSE)

