Viterbi: Viterbi Algorithm for Hidden Markov Model


Description

Provides methods for the generic function Viterbi. This predicts the most likely sequence of Markov states given the observed dataset. There is currently no method for objects of class "mmpp".

Usage

## S3 method for class 'dthmm'
Viterbi(object, ...)
## S3 method for class 'mmglm0'
Viterbi(object, ...)
## S3 method for class 'mmglm1'
Viterbi(object, ...)
## S3 method for class 'mmglmlong1'
Viterbi(object, ...)

Arguments

object

an object with class "dthmm", "mmglm0", "mmglm1" or "mmglmlong1".

...

other arguments.

Details

The purpose of the Viterbi algorithm is to globally decode the underlying hidden Markov state at each time point. It does this by determining the sequence of states (k_1^*, ..., k_n^*) which maximises the joint distribution of the hidden states given the entire observed process, i.e.

(k_1^*, ..., k_n^*) = argmax_{k_1, ..., k_n} Pr{ C_1=k_1, ..., C_n=k_n | X^{(n)}=x^{(n)} }.

The algorithm has been taken from Zucchini (2005); however, we calculate sums of the logarithms of probabilities rather than products of probabilities, which lessens the chance of numerical underflow. Since the logarithmic function is monotonically increasing, the argmax is unchanged. Note that argmax can be evaluated with the R function which.max; a sketch of the log-space recursion follows.
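As an illustration only (this is a sketch, not the package's internal code; the function name viterbi_log and the input logdens are assumptions made here), a log-space Viterbi recursion for an m-state HMM might look as follows:

#  A minimal log-space Viterbi sketch.  Inputs: transition matrix
#  Pi (m x m), initial distribution delta (length m), and logdens,
#  an n x m matrix with logdens[i,j] = log Pr{X_i = x_i | C_i = j}.
viterbi_log <- function(Pi, delta, logdens) {
    n <- nrow(logdens)
    m <- ncol(logdens)
    logPi <- log(Pi)
    nu <- matrix(NA, nrow=n, ncol=m)
    #  sums of logs replace products of probabilities
    nu[1, ] <- log(delta) + logdens[1, ]
    for (i in 2:n)
        nu[i, ] <- apply(nu[i-1, ] + logPi, MARGIN=2, FUN=max) + logdens[i, ]
    #  backtrack to recover the maximising state sequence
    k <- numeric(n)
    k[n] <- which.max(nu[n, ])
    for (i in (n-1):1)
        k[i] <- which.max(nu[i, ] + logPi[, k[i+1]])
    k
}

With the Poisson example below, logdens could be built as outer(x$x, lambda, dpois, log=TRUE).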

Determining the a posteriori most probable state at time i is referred to as local decoding, i.e.

k_i^* = argmax_k Pr{ C_i=k | X^{(n)}=x^{(n)} }.

The above probabilities are calculated by the function Estep and are contained in the matrix u that it returns, i.e. k_i^* is simply which.max(u[i,]); see the sketch below.
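For instance, reusing x, Pi, delta and lambda from the Poisson example in the Examples section (local.states is a name chosen here), all n time points can be locally decoded in one call:

#  locally decode every time point at once; u is the n x m matrix
#  of conditional state probabilities returned by Estep
u <- Estep(x$x, Pi, delta, "pois", list(lambda=lambda))$u
local.states <- apply(u, MARGIN=1, FUN=which.max)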

The code for the methods "dthmm", "mmglm0", "mmglm1" and "mmglmlong1" can be viewed by appending Viterbi.dthmm, Viterbi.mmglm0, Viterbi.mmglm1 or Viterbi.mmglmlong1, respectively, to HiddenMarkov::: on the R command line; e.g. HiddenMarkov:::Viterbi.dthmm. The three colons are needed because these method functions are not exported from the package NAMESPACE.
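Alternatively, the registered methods can be listed with the standard R function methods:

#  list the S3 methods registered for the generic function Viterbi
library(HiddenMarkov)
methods(Viterbi)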

Value

A vector of length n containing integers (1, ..., m) representing the estimated hidden Markov state at each time point of the chain.

References

Cited references are listed on the HiddenMarkov manual page.

Examples

library(HiddenMarkov)

Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
               1/3, 1/3, 1/3,   0,   0,
                 0, 1/3, 1/3, 1/3,   0,
                 0,   0, 1/3, 1/3, 1/3,
                 0,   0,   0, 1/2, 1/2),
             byrow=TRUE, nrow=5)
delta <- c(0, 1, 0, 0, 0)     # chain starts in state 2
lambda <- c(1, 4, 2, 5, 3)    # Poisson mean for each state
m <- nrow(Pi)

x <- dthmm(NULL, Pi, delta, "pois", list(lambda=lambda), discrete=TRUE)
x <- simulate(x, nsim=2000)

#------  Global Decoding  ------

states <- Viterbi(x)
states <- factor(states, levels=1:m)

#  Compare predicted states with true states
#  p[j,k] = Pr{Viterbi predicts state k | true state is j}
p <- matrix(NA, nrow=m, ncol=m)
for (j in 1:m){
    a <- (x$y==j)
    p[j,] <- table(states[a])/sum(a)
}
print(p)

#------  Local Decoding  ------

#   locally decode at i=100

print(which.max(Estep(x$x, Pi, delta, "pois", list(lambda=lambda))$u[100,]))

#---------------------------------------------------
#   simulate a beta HMM

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

y <- seq(0.01, 0.99, 0.01)
plot(y, dbeta(y, 2, 6), type="l", ylab="Density", col="blue")
points(y, dbeta(y, 6, 2), type="l", col="red")

n <- 100
x <- dthmm(NULL, Pi, delta, "beta",
           list(shape1=c(2, 6), shape2=c(6, 2)))
x <- simulate(x, nsim=n)

#   colour denotes actual hidden Markov state
plot(1:n, x$x, type="l", xlab="Time", ylab="Observed Process")
points((1:n)[x$y==1], x$x[x$y==1], col="blue", pch=15)
points((1:n)[x$y==2], x$x[x$y==2], col="red", pch=15)

states <- Viterbi(x)
#   mark the wrongly predicted states
wrong <- (states != x$y)
points((1:n)[wrong], x$x[wrong], pch=1, cex=2.5, lwd=2)

Example output

The 5 x 5 matrix is the comparison matrix p (rows index the true state, columns the Viterbi-predicted state); the final [1] 3 is the locally decoded state at i=100.

           [,1]       [,2]       [,3]        [,4]       [,5]
[1,] 0.83076923 0.07179487 0.02051282 0.005128205 0.07179487
[2,] 0.21917808 0.53424658 0.03287671 0.106849315 0.10684932
[3,] 0.34838710 0.13763441 0.13978495 0.038709677 0.33548387
[4,] 0.03502627 0.23117338 0.04378284 0.427320490 0.26269702
[5,] 0.09653465 0.11881188 0.13118812 0.146039604 0.50742574
[1] 3
