baumWelch | R Documentation |
For an initial Hidden Markov Model (HMM) and a given sequence of observations, the Baum-Welch algorithm infers optimal parameters for the HMM. Since the Baum-Welch algorithm is a variant of the Expectation-Maximisation (EM) algorithm, it converges only to a local optimum, which may not be the global optimum.
baumWelch(hmm, observation, maxIterations=100, delta=1E-9, pseudoCount=0)
hmm |
A Hidden Markov Model. |
observation |
A sequence of observations. |
maxIterations |
The maximum number of iterations in the Baum-Welch algorithm. |
delta |
Additional termination condition: the algorithm stops early if the change between consecutive transition and emission matrices falls below delta before maxIterations is reached. |
pseudoCount |
The amount of pseudo counts added in the estimation step of the Baum-Welch algorithm. |
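The effect of pseudoCount can be sketched in plain R. The helper below is illustrative only (the name normaliseWithPseudo is ours, not part of the HMM package): it shows how additive pseudo counts smooth a row of expected transition counts before normalisation, preventing hard zero probabilities.

```r
# Illustrative helper (not from the HMM package): re-estimate one row of
# a transition matrix from expected counts, with additive smoothing.
normaliseWithPseudo <- function(counts, pseudoCount = 0) {
  counts <- counts + pseudoCount   # add pseudo counts to each expected count
  counts / sum(counts)             # renormalise to a probability row
}

normaliseWithPseudo(c(0, 4), pseudoCount = 0)  # c(0, 1): a hard zero survives
normaliseWithPseudo(c(0, 4), pseudoCount = 1)  # c(1/6, 5/6): zero is smoothed away
```

A state/observation pair that never occurs in the data would otherwise be re-estimated to probability zero and could never recover in later iterations.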
Dimension and Format of the Arguments:
hmm |
A valid Hidden Markov Model, for example instantiated by initHMM. |
observation |
A vector of observations. |
Return Values:
hmm |
The inferred HMM. The representation is equivalent to the representation in initHMM. |
difference |
Vector of differences calculated from consecutive transition and emission matrices in each iteration of the Baum-Welch procedure. The difference is the sum of the L2-norm distances between consecutive transition matrices and between consecutive emission matrices. |
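The per-iteration difference can be reproduced from two consecutive parameter sets. A minimal sketch, assuming hmm objects carry transProbs and emissionProbs matrices as produced by initHMM (the helper name paramDiff is ours):

```r
# Illustrative helper (name is ours): sum of the L2 (Frobenius) distances
# between consecutive transition and emission matrices.
paramDiff <- function(old, new) {
  dTrans <- sqrt(sum((old$transProbs    - new$transProbs)^2))
  dEmiss <- sqrt(sum((old$emissionProbs - new$emissionProbs)^2))
  dTrans + dEmiss
}
```

Once this value falls below delta, the Baum-Welch run terminates before reaching maxIterations.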
Lin Himmelmann <hmm@linhi.com>, Scientific Software Development
For details see: Lawrence R. Rabiner: A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE 77(2), pp. 257-286, 1989.
See also: viterbiTraining.
# Initial HMM
hmm = initHMM(c("A","B"), c("L","R"),
              transProbs=matrix(c(.9,.1,.1,.9), 2),
              emissionProbs=matrix(c(.5,.51,.5,.49), 2))
print(hmm)
# Sequence of observations
a = sample(c(rep("L",100), rep("R",300)))
b = sample(c(rep("L",300), rep("R",100)))
observation = c(a, b)
# Baum-Welch
bw = baumWelch(hmm, observation, 10)
print(bw$hmm)