viterbi                                R Documentation
Description

Calculates “the” most probable state sequence underlying each of one or more replicate observation sequences.
Usage

viterbi(y, model = NULL, tpm, Rho, ispd = NULL, log = FALSE, warn = TRUE)
Arguments

y: The observations for which the most probable sequence(s) of underlying hidden states are required. May be a sequence of observations in the form of a vector or a one or two column matrix, or a list each component of which constitutes a (replicate) sequence of observations. It may also be an object of class "multipleHmmDataSets". If y is not supplied, it is extracted from model (provided that model and its y component are not NULL).

model: An object describing a hidden Markov model, as fitted to the data set y by hmm().

tpm: The transition probability matrix for a hidden Markov model; ignored if model is not NULL.

Rho: An object specifying the probability distributions of the observations for a hidden Markov model. See hmm() for details. Ignored if model is not NULL.

ispd: The initial state probability distribution for a hidden Markov model; ignored if model is not NULL.

log: Logical scalar. Should logarithms be used in the recursive calculations of the probabilities involved in the Viterbi algorithm, so as to avoid underflow? If FALSE, the raw probabilities are used, which may underflow for long observation sequences.

warn: Logical scalar; should a warning be issued if the row names of Rho do not match the values of y? (See the examples, where setting rownames(R) avoids such a warning.)
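The log argument guards against a concrete numerical hazard: the probability of any single long state path is a product of many factors less than one, which underflows double precision, whereas the corresponding sum of log probabilities stays finite. A minimal illustration in plain R, independent of the package:

```r
# Multiplying many small probabilities underflows to exactly zero in
# double precision; summing their logarithms remains finite.
p <- rep(0.01, 200)   # 200 emission/transition factors of 0.01
prod(p)               # 1e-400 is below the smallest representable double: 0
sum(log(p))           # 200 * log(0.01), about -921.03: finite
```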
Details

Applies the Viterbi algorithm to calculate “the” most probable state sequence underlying each observation sequence.
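For readers who want to see the recursion itself, here is a minimal log-space sketch for a discrete-output HMM. viterbiSketch is a hypothetical illustration, not the package's implementation; its tpm, Rho and ispd arguments mirror those of viterbi(), with Rho taken to be a matrix whose rows are indexed by the observed values and whose columns correspond to states. It returns a single best path and ignores ties.

```r
# Hypothetical sketch of the Viterbi recursion in log space.
# Assumes y is a vector of integer codes indexing the rows of Rho.
viterbiSketch <- function(y, tpm, Rho, ispd) {
    K  <- nrow(tpm)            # number of hidden states
    n  <- length(y)
    lt <- log(tpm)
    # delta[t, j]: log probability of the best path ending in state j at time t
    delta <- matrix(-Inf, n, K)
    psi   <- matrix(0L, n, K)  # back-pointers
    delta[1, ] <- log(ispd) + log(Rho[y[1], ])
    for (tt in seq_len(n)[-1]) {
        for (j in seq_len(K)) {
            cand         <- delta[tt - 1, ] + lt[, j]
            psi[tt, j]   <- which.max(cand)
            delta[tt, j] <- max(cand) + log(Rho[y[tt], j])
        }
    }
    # Trace back the most probable state sequence.
    s <- integer(n)
    s[n] <- which.max(delta[n, ])
    for (tt in rev(seq_len(n - 1))) s[tt] <- psi[tt + 1, s[tt + 1]]
    s
}
```

With observed values coded 1, ..., m indexing the rows of Rho, viterbiSketch(y, tpm, Rho, ispd) returns an integer vector of states in 1:K, one per observation.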
Value

If y consists of a single observation sequence, the value is the most probable underlying state sequence, or a matrix whose columns consist of such sequences if there is more than one (equally) most probable sequence.

If y consists of a list of observation sequences, the value is a list, each entry of which is of the form described above.

If y is of class "multipleHmmDataSets", the value returned is a list of lists of the sort described above.
Warning

There may be more than one equally most probable state sequence underlying a given observation sequence. This phenomenon can occur, but appears to be unlikely in practice.
Note

The correction to the code to avoid underflow problems was made in response to an inquiry and suggestion from Owen Marshall.
Author(s)

Rolf Turner
r.turner@auckland.ac.nz
References

Rabiner, L. R., "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, vol. 77, pp. 257–286, 1989.
See Also

hmm(), rhmm(), mps(), pr()
Examples

# See the help for logLikHmm() for how to generate y.num and y.let.
## Not run:
fit.num <- hmm(y.num, K = 2, verb = TRUE, keep.y = TRUE)
v.1 <- viterbi(model = fit.num)
rownames(R) <- 1:5  # Avoids a (harmless) warning.
v.2 <- viterbi(y.num, tpm = P, Rho = R)
# P and R as in the help for logLikHmm() and for sp().
# Note that the order of the states has gotten swapped; 3 - v.1[[1]]
# is identical to v.2[[1]]; for other k = 2, ..., 20, 3 - v.1[[k]]
# is much more similar to v.2[[k]] than is v.1[[k]].
fit.let <- hmm(y.let, K = 2, verb = TRUE, keep.y = TRUE)
v.3 <- viterbi(model = fit.let)
rownames(R) <- letters[1:5]
v.4 <- viterbi(y.let, tpm = P, Rho = R)
## End(Not run)