llnorMix                package:nor1mix                R Documentation

View source: R/norMix.R

Likelihood, Parametrization and EM-Steps For 1D Normal Mixtures

Description

These functions work with an almost unconstrained parametrization of univariate normal mixtures.

llnorMix(p, *)

computes the log likelihood,

obj <- par2norMix(p)

maps parameter vector p to a norMix object obj,

p <- nM2par(obj)

maps from a norMix object obj to parameter vector p,

where p is always a parameter vector in our parametrization.
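
For instance (a small sketch, using one of the package's Marron-Wand "norMix" objects purely for illustration; the 'Examples' section below gives a fully worked version):

set.seed(7)
obj <- MW.nm2                        # a Marron-Wand example density from nor1mix
(p <- nM2par(obj))                   # its parameter vector, of length 3*m - 1
par2norMix(p)                        # and back to an equivalent "norMix" object
llnorMix(p, x = rnorMix(100, obj))   # log-likelihood at p for simulated data x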

Partly for didactical reasons, the following functions provide the basic ingredients of the EM algorithm (see also norMixEM) for parameter estimation:

estep.nm(x, obj, p)

computes 1 E-step for the data x, given either a "norMix" object obj or a parameter vector p.

mstep.nm(x, z)

computes 1 M-step for the data x and the probability matrix z.

emstep.nm(x, obj)

computes 1 E- and 1 M-step for the data x and the "norMix" object obj.

where, again, p is a parameter vector in our parametrization, x is the (univariate) data, z is an n \times m matrix of (posterior) conditional probabilities, and \theta is the full parameter vector of the mixture model.
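
As a small additional sketch (not part of the package's official examples), the E-step posteriors can also be computed “by hand”; this assumes, as in current nor1mix versions, that a "norMix" object is an m x 3 matrix with columns "mu", "sigma", and "w":

obj <- MW.nm4                              # some example mixture from nor1mix
x <- rnorMix(50, obj)                      # a small artificial data set
## unnormalized posteriors  pi_j * phi((x_i - mu_j)/sigma_j)/sigma_j :
zu <- sapply(seq_len(nrow(obj)), function(j)
             obj[j,"w"] * dnorm(x, mean = obj[j,"mu"], sd = obj[j,"sigma"]))
z.hand <- zu / rowSums(zu)                 # normalize each row to sum to 1
all.equal(z.hand, estep.nm(x, obj), check.attributes = FALSE)  # should be TRUE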

Usage

llnorMix(p, x, m = (length(p) + 1)/3, trafo = c("clr1", "logit"))

par2norMix(p, trafo = c("clr1", "logit"), name = )

nM2par(obj, trafo = c("clr1", "logit"))

 estep.nm(x, obj, par)
 mstep.nm(x, z)
emstep.nm(x, obj)

Arguments

p, par

numeric vector: our parametrization of a univariate normal mixture, see details.

x

numeric: the data for which the likelihood is to be computed.

m

integer number of mixture components; this is not to be changed for a given p.

trafo

character string specifying the transformation of the m-vector of component weights w (mathematical notation in norMix: \pi_j, j=1,\dots,m) to an (m-1)-dimensional unconstrained parameter vector in our parametrization. "logit" was hard-wired up to nor1mix version 1.2-3 and was replaced as the default in 2019, for nor1mix version 1.2-4, by "clr1", which is more symmetric and is basically Aitchison's centered log ratio; see also the CRAN package compositions' clr().

name

(for par2norMix():) a name for the "norMix" object that is returned; a smart default is used.

obj

a "norMix" object, see norMix.

z

an n \times m matrix of (posterior) conditional probabilities, z_{ij} = P(x_i \in C_j \mid \theta), where C_j denotes the j-th group (“cluster”).

Details

We use a parametrization of a (finite) m-component univariate normal mixture which is particularly apt for likelihood maximization, namely, one whose parameter space is almost a full \mathbb{R}^M, M = 3m-1.

For an m-component mixture, we map to and from a parameter vector \theta (== p as R-vector) of length 3m-1. For mixture density

\sum_{j=1}^m \pi_j \phi((t - \mu_j)/\sigma_j)/\sigma_j,

we transform the \pi_j (for j \in 1,\dots,m) via the transform specified by trafo (see below), and log-transform the \sigma_j. Consequently, \theta is partitioned into

p[ 1:(m-1)]:

For

trafo = "logit":

p[j]= \mbox{logit}(\pi_{j+1}) and \pi_1 is given implicitly as \pi_1 = 1-\sum_{j=2}^m \pi_j.

trafo = "clr1":

(centered log ratio, omitting 1st element): Set \ell_j := \ln(\pi_j) for j = 1, \dots, m, and p[j] = \ell_{j+1} - \frac{1}{m}\sum_{j'=1}^m \ell_{j'} for j = 1, \dots, m-1.

p[ m:(2m-1)]:

p[m-1+ j]= \mu_j, for j=1:m.

p[2m:(3m-1)]:

p[2*m-1+ j] = \log(\sigma_j), i.e., \sigma_j^2 = \exp(2\, p[2*m-1+ j]), for j=1:m.
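
As a small sketch of this layout (again assuming the "norMix" matrix columns "mu", "sigma", and "w"), the parameter vector for trafo = "clr1" can be assembled by hand and compared with nM2par():

obj <- MW.nm2                        # a Marron-Wand example mixture
l <- log(obj[,"w"])                  # l_j = ln(pi_j)
p.hand <- c((l - mean(l))[-1],       # clr1-transformed weights, 1st one omitted
            obj[,"mu"],              # component means
            log(obj[,"sigma"]))      # log standard deviations
all.equal(p.hand, nM2par(obj, trafo = "clr1"), check.attributes = FALSE)  # should be TRUE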

Value

llnorMix() returns a number, namely the log-likelihood.

par2norMix() returns a "norMix" object, see norMix.

nM2par() returns the parameter vector \theta of length 3m-1.

estep.nm() returns z, the n \times m matrix of (posterior) conditional probabilities.

mstep.nm() returns the model parameters as a list with components w, mu, and sigma, corresponding to the arguments of norMix() (see the 'Examples' for using do.call(norMix, *) with it).

emstep.nm() returns an updated "norMix" object.

Author(s)

Martin Maechler

See Also

norMix, logLik. Note that the log-likelihood of data x under a "norMix" object obj is directly given by sum(dnorMix(x, obj, log=TRUE)).

To fit a mixture using the EM algorithm, rather use norMixEM() than the estep.nm(), mstep.nm(), or emstep.nm() functions directly.

Note that direct likelihood maximization, i.e., MLE, is typically considerably more efficient than EM and usually converges well with our parametrization; see norMixMLE.
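
For instance (a small sketch; it only assumes that norMixEM() and norMixMLE() take the data and the number of components as their first two arguments, see their help pages):

set.seed(17)
x <- rnorMix(500, MW.nm6)                 # simulate from a bimodal MW density
fitEM  <- norMixEM (x, 2)                 # EM iterations
fitMLE <- norMixMLE(x, 2)                 # direct likelihood maximization
c(EM  = llnorMix(nM2par(fitEM),  x),      # log-likelihoods attained by each fit
  MLE = llnorMix(nM2par(fitMLE), x))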

Examples

(obj <- MW.nm10) # "the Claw" -- m = 6 components
length(pp <- nM2par(obj)) # 17 == (3*6) - 1
par2norMix(pp)
## really the same as the initial 'obj' above

## Log likelihood (of very artificial data):
llnorMix(pp, x = seq.int(-2, 2, length.out = 1000))
set.seed(47) ## of more realistic data:
x <- rnorMix(1000, obj)
llnorMix(pp, x)

## Consistency check :  nM2par()  and  par2norMix()  are inverses
all.EQ <- function(x,y, tol = 1e-15, ...) all.equal(x,y, tolerance=tol, ...)
stopifnot(all.EQ(pp, nM2par(par2norMix(pp))),
          all.EQ(obj, par2norMix(nM2par(obj)),
                    check.attributes=FALSE),
          ## Direct computation of log-likelihood:
          all.EQ(sum(dnorMix(x, obj, log=TRUE)),
                    llnorMix(pp, x))  )

## E- and M- steps : ------------------------------
rE1 <- estep.nm(x, obj)
rE2 <- estep.nm(x, par=pp) # the same as rE1
z <- rE1
str( rM <-  mstep.nm(x, z))
   (rEM <- emstep.nm(x, obj))

stopifnot(all.EQ(rE1, rE2),
          all.EQ(rEM, do.call(norMix, c(rM, name=""))))
