Description

Viterbi algorithm to decode the latent states of a Gaussian hidden Markov model, with or without autoregressive structure.
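As an illustration of the decoding the function performs, here is a minimal standalone sketch of the Viterbi recursion in R, restricted to a univariate Gaussian HMM with no autoregressive terms (the package function handles the multivariate and autoregressive cases). The function name `viterbi_sketch` and its argument layout are hypothetical, not the package's API; `sdv` holds per-state standard deviations rather than covariance matrices.

```r
# Viterbi recursion for a univariate Gaussian HMM (illustrative sketch).
# delta: initial state probabilities; gamma: transition matrix;
# mu, sdv: per-state emission means and standard deviations.
viterbi_sketch <- function(y, delta, gamma, mu, sdv) {
  n <- length(y)
  m <- length(delta)
  # n x m matrix of log emission densities
  logp <- sapply(1:m, function(j) dnorm(y, mu[j], sdv[j], log = TRUE))
  v  <- matrix(-Inf, n, m)   # best log-probability of any path ending in state j at time t
  bp <- matrix(0L, n, m)     # back-pointers
  v[1, ] <- log(delta) + logp[1, ]
  for (t in 2:n) {
    for (j in 1:m) {
      cand <- v[t - 1, ] + log(gamma[, j])
      bp[t, j] <- which.max(cand)
      v[t, j]  <- max(cand) + logp[t, j]
    }
  }
  # backtrack from the most likely final state
  states <- integer(n)
  states[n] <- which.max(v[n, ])
  for (t in (n - 1):1) states[t] <- bp[t + 1, states[t + 1]]
  states
}
```

Working in log space avoids the numerical underflow that products of small probabilities would cause on long series.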
Usage

viterbi.hmm(y, mod)
Arguments

y: observed series.

mod: list consisting of at least the following items: mod$m = scalar number of states; mod$delta = vector of initial values for prior probabilities; mod$gamma = matrix of initial values for state transition probabilities; mod$mu = list of initial values for means; mod$sigma = list of initial values for covariance matrices. For autoregressive hidden Markov models, the following additional items are also required: mod$arp = scalar order of the autoregressive structure; mod$auto = list of initial values for autoregressive coefficient matrices.
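To make the required structure concrete, here is a hypothetical minimal `mod` list for a two-state univariate model without autoregressive structure (so `arp` and `auto` are omitted); the dimensions and values are illustrative only, not defaults of the package.

```r
# Hypothetical minimal mod list: 2 states, univariate emissions,
# no autoregressive structure.
mod <- list(
  m     = 2,                                        # number of states
  delta = c(0.5, 0.5),                              # initial prior probabilities
  gamma = matrix(c(0.9, 0.1,
                   0.2, 0.8), 2, 2, byrow = TRUE),  # transition probabilities
  mu    = list(0, 5),                               # per-state means
  sigma = list(matrix(1), matrix(1))                # per-state covariance matrices
)
```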
Value

a list containing the decoded states
References

Rabiner, Lawrence R. "A tutorial on hidden Markov models and selected applications in speech recognition." Proceedings of the IEEE 77.2 (1989): 257-286.
Examples

set.seed(135)
m <- 2
mu <- list(c(3,4,5),c(-2,-3,-4))
sigma <- list(diag(1.3,3),
matrix(c(1,-0.3,0.2,-0.3,1.5,0.3,0.2,0.3,2),3,3,byrow=TRUE))
delta <- c(0.5,0.5)
gamma <- matrix(c(0.8,0.2,0.1,0.9),2,2,byrow=TRUE)
auto <- list(matrix(c(0.3,0.2,0.1,0.4,0.3,0.2,
-0.3,-0.2,-0.1,0.3,0.2,0.1,
0,0,0,0,0,0),3,6,byrow=TRUE),
matrix(c(0.2,0,0,0.4,0,0,
0,0.2,0,0,0.4,0,
0,0,0.2,0,0,0.4),3,6,byrow=TRUE))
mod <- list(m=m,mu=mu,sigma=sigma,delta=delta,gamma=gamma,auto=auto,arp=2)
sim <- hmm.sim(2000,mod)
y <- sim$series
state <- sim$state
fit <- em.hmm(y=y, mod=mod, arp=2)
state_est <- viterbi.hmm(y=y,mod=fit)
sum(state_est!=state)
iteration 1 ; loglik = -10163.31
iteration 2 ; loglik = -10128.05
iteration 3 ; loglik = -10128.05
[1] 0