forwardBackward: forward-backward function

View source: R/RHmm.R

Description

The forward-backward function computes the quantities used in the Baum-Welch algorithm.

Usage

forwardBackward(HMM, obs, logData=TRUE)

Arguments

HMM

an HMMClass or HMMFitClass object

obs

a vector (or matrix) of observations, or a list of vectors (or matrices) if there is more than one sample (see the sketch after this list)

logData

a boolean. If TRUE, the function computes the logarithm of the Alpha, Beta and Rho quantities instead of the quantities themselves.
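
When obs is a list of samples, the result mirrors that structure, with one result list per sample (see Value below). A minimal sketch, assuming a fitted model Res and two observation vectors obs1 and obs2 (hypothetical names, not objects shipped with RHmm):

    # Sketch: multiple samples (Res, obs1, obs2 are hypothetical objects)
    fbList <- forwardBackward(Res, list(obs1, obs2))
    str(fbList[[1]]$Alpha)   # (log) forward probabilities, first sample
    fbList[[2]]$LLH          # log-likelihood of the second sample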

Value

If obs is a single sample, a list with the following elements; if obs is a list of samples, a list of such lists, one per sample. See the Note for mathematical definitions, and the sketch after this list for a typical use of these elements.

Alpha

The matrix of (log) 'forward' probabilities (size: number of obs. times number of hidden states)

Beta

The matrix of (log) 'backward' probabilities (size: number of obs. times number of hidden states)

Gamma

The matrix of probabilities of being in state i at time t (size: number of obs. times number of hidden states)

Xsi

The matrix of probabilities of being in state i at time t and being in state j at time t + 1 (size: number of obs. times number of hidden states)

Rho

The vector of (log) probabilities of seeing the partial sequence obs[1] ... obs[t] (size: number of obs.)

LLH

Log-likelihood
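
As an illustration of how these elements are typically used, posterior ("smoothed") decoding picks, at each time step, the hidden state with the largest Gamma probability. A minimal sketch, assuming fb is the value of a forwardBackward call:

    # Sketch: most probable state at each time step, from the Gamma matrix
    # ('fb' is assumed to be the result of a forwardBackward() call)
    smoothed <- apply(fb$Gamma, 1, which.max)
    table(smoothed)   # number of time steps assigned to each hidden state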

Note

Let o=(o(1),\,\ldots,\,o(T)) be the vector of observations, and O=(O(t),\,t=1,\,\ldots,\,T) the corresponding random variables. Let Q=(Q(t),\,t=1,\,\ldots,\,T) be the hidden Markov chain whose values are in \left\{1,\,\ldots,\,nStates\right\}. We have the following definitions:

\alpha_i(t) = P(O(1)=o(1),\,\ldots,\,O(t)=o(t),\,Q(t)=i\,|\,HMM), which is the probability of seeing the partial sequence o(1),\,\ldots,\,o(t) and ending up in state i at time t.

\beta_i(t) = P(O(t+1)=o(t+1),\,\ldots,\,O(T)=o(T)\,|\,Q(t)=i,\,HMM), which is the probability of the ending partial sequence o(t+1),\,\ldots,\,o(T) given that we are in state i at time t.

\Gamma_i(t) = P(Q(t)=i\,|\,O=o,\,HMM), which is the probability of being in state i at time t given the observation sequence O=o.

\xi_{ij}(t) = P(Q(t)=i,\,Q(t+1)=j\,|\,O=o,\,HMM), which is the probability of being in state i at time t and in state j at time t+1 given the observation sequence O=o.

\rho(t) = P(O(1)=o(1),\,\ldots,\,O(t)=o(t)\,|\,HMM), which is the probability of seeing the partial sequence o(1),\,\ldots,\,o(t).

LLH = \ln\rho(T).

As the sequence of observations grows, the probabilistic values in this algorithm become increasingly small and eventually numerically indistinguishable from zero. For that reason, the Alpha, Beta and Rho quantities are scaled during the iterations of the algorithm to avoid underflow problems. The logarithms of these probabilistic values are computed from the logarithms of the scaled quantities, which gives a more precise result.
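
For concreteness, here is a self-contained sketch of these recursions for a toy discrete-emission HMM. The transition matrix A, emission matrix B, initial distribution p0 and the symbol coding are illustrative inventions, not RHmm objects; the code follows the unscaled definitions above:

    # Toy discrete HMM: unscaled recursions matching the definitions above
    A  <- matrix(c(0.9, 0.1,
                   0.2, 0.8), 2, 2, byrow=TRUE)  # transition probabilities
    B  <- matrix(c(0.7, 0.3,
                   0.1, 0.9), 2, 2, byrow=TRUE)  # B[i, k] = P(O = k | Q = i)
    p0 <- c(0.5, 0.5)                            # initial state distribution
    o  <- c(1, 1, 2, 2, 1)                       # observed symbols
    T.len <- length(o); nStates <- nrow(A)
    alpha <- beta <- matrix(0, T.len, nStates)
    alpha[1, ] <- p0 * B[, o[1]]                 # forward initialisation
    for (t in 2:T.len)                           # forward recursion
      alpha[t, ] <- (alpha[t - 1, ] %*% A) * B[, o[t]]
    beta[T.len, ] <- 1                           # backward initialisation
    for (t in (T.len - 1):1)                     # backward recursion
      beta[t, ] <- A %*% (B[, o[t + 1]] * beta[t + 1, ])
    rho   <- rowSums(alpha)                      # rho(t) = P(o(1), ..., o(t))
    gamma <- alpha * beta / rho[T.len]           # Gamma_i(t); rows sum to 1
    xsi1  <- (alpha[1, ] %o% (B[, o[2]] * beta[2, ])) * A / rho[T.len]  # xsi_ij(1)
    LLH   <- log(rho[T.len])                     # LLH = ln rho(T)

On this five-step sequence the unscaled recursion is exact, but the alpha and beta values shrink roughly geometrically with t, which is why longer sequences require the scaling (or the log quantities) described above.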

References

Jeff A. Bilmes (1997). A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models. http://ssli.ee.washington.edu/people/bilmes/mypapers/em.ps.gz

Examples

    data(n1d_3s)
    # Fit a 3-state Gaussian HMM to the n1d_3s observations
    Res_n1d_3s <- HMMFit(obs_n1d_3s, nStates=3)
    # Forward-backward procedure with log Alpha, Beta and Rho
    fbLog <- forwardBackward(Res_n1d_3s, obs_n1d_3s)
    # Forward-backward procedure with plain Alpha, Beta and Rho
    fb <- forwardBackward(Res_n1d_3s, obs_n1d_3s, logData=FALSE)
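    # Sanity check (sketch): per the Note, LLH = ln(rho(T)), so with
    # logData=TRUE the last entry of the (log) Rho vector should match LLH
    all.equal(fbLog$LLH, as.numeric(tail(fbLog$Rho, 1)))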
  
