viterbiTraining: Estimate HMM Parameters Using Viterbi Training


Description

Viterbi training is a faster but less reliable alternative to the Baum-Welch algorithm for estimating the parameters of a hidden Markov model.
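
The sketch below is not part of the package interface; it is a minimal, self-contained illustration of the generic Viterbi training loop (segmental k-means, see References) for a two-state HMM with Gaussian emissions: decode the most likely state path, re-estimate the parameters from the hard state assignments, and iterate until the Viterbi log likelihood stops improving. tileHMM itself uses t emission distributions and supports priors, multiple observation sequences and the further options documented below.

## illustrative only: Viterbi decoding for a k-state Gaussian HMM
viterbi.path <- function(x, p0, A, mu, sigma) {
    n <- length(x)
    k <- length(p0)
    logB <- sapply(1:k, function(j) dnorm(x, mu[j], sigma[j], log=TRUE))
    delta <- matrix(-Inf, n, k)
    psi <- matrix(0L, n, k)
    delta[1, ] <- log(p0) + logB[1, ]
    for (t in 2:n) {
        for (j in 1:k) {
            s <- delta[t-1, ] + log(A[, j])
            psi[t, j] <- which.max(s)
            delta[t, j] <- max(s) + logB[t, j]
        }
    }
    path <- integer(n)
    path[n] <- which.max(delta[n, ])
    for (t in (n-1):1) path[t] <- psi[t+1, path[t+1]]
    list(path=path, loglik=max(delta[n, ]))
}

## illustrative only: Viterbi training for two states on a single sequence
## (assumes both states are visited; initial state probabilities kept fixed)
viterbi.train <- function(x, p0, A, mu, sigma, max.iter=10, eps=0.01) {
    old.ll <- -Inf
    for (iter in 1:max.iter) {
        v <- viterbi.path(x, p0, A, mu, sigma)
        s <- v$path
        ## re-estimate emission and transition parameters from the hard
        ## state assignments along the Viterbi path
        for (j in 1:2) {
            mu[j] <- mean(x[s == j])
            sigma[j] <- sd(x[s == j])
            for (l in 1:2)
                A[j, l] <- sum(s[-length(s)] == j & s[-1] == l) /
                    sum(s[-length(s)] == j)
        }
        ## stop when the Viterbi log likelihood no longer improves
        if (abs(v$loglik - old.ll) < eps) break
        old.ll <- v$loglik
    }
    list(p0=p0, A=A, mu=mu, sigma=sigma, loglik=v$loglik)
}

## toy data: two well separated states
set.seed(1)
x <- c(rnorm(100, 0, 1), rnorm(100, 4, 1))
viterbi.train(x, p0=c(0.5, 0.5), A=matrix(0.5, 2, 2), mu=c(-1, 5), sigma=c(1, 1))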

Usage

## S4 method for signature 'hmm,list'
viterbiTraining(hmm, obs, max.iter=10, eps=0.01, 
        df=NULL, trans.prior=NULL, init.prior=NULL, keep.models=NULL, verbose=1)

Arguments

hmm

Object of class hmm.

obs

List of observation sequences.

max.iter

Maximum number of iterations.

eps

Minimum change in log likelihood between successive iterations.

df

If this is NULL, the degrees of freedom of the t distributions are estimated from the data. Otherwise they are set to df.

trans.prior

Prior distribution of transition probabilities. A prior can be specified either by providing a matrix with transition probabilities or by setting trans.prior=TRUE. In the latter case the initial parameter estimates are used as prior. If trans.prior is NULL (the default) no prior is used.

init.prior

Prior distribution of initial state probabilities. A prior can be specified either by providing a vector with initial state probabilities or by setting init.prior=TRUE. In the latter case the initial parameter estimates are used as prior. If init.prior is NULL (the default) no prior is used. Both styles of prior specification are illustrated in the sketch after this argument list.

keep.models

A character string interpreted as a file name. If keep.models is not NULL, the models produced during the parameter estimation procedure are saved to that file.

verbose

Level of verbosity. Allows some control over the amount of output printed to the console.
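
As noted above for trans.prior and init.prior, priors can be given either explicitly or by setting the argument to TRUE. The sketch below illustrates both styles for a two-state model; the numeric values are illustrative only, and hmm2 and obs.lst are assumed to be set up as in the Examples section below.

## explicit priors: a 2 x 2 matrix of transition probabilities and a
## length-2 vector of initial state probabilities (values illustrative)
trans.p <- matrix(c(0.965, 0.035,
                    0.010, 0.990), nrow=2, byrow=TRUE)
init.p <- c(0.5, 0.5)
## hmm2 and obs.lst as constructed in the Examples below
# fit.a <- viterbiTraining(hmm2, obs.lst, trans.prior=trans.p, init.prior=init.p)

## alternatively, use the initial parameter estimates as the prior
# fit.b <- viterbiTraining(hmm2, obs.lst, trans.prior=TRUE, init.prior=TRUE)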

Value

Object of class hmm with the best parameter estimates (in terms of likelihood) found during the fitting procedure.

Author(s)

Peter Humburg

References

Juang, B.-H. and Rabiner, L. R. (1990) A segmental k-means algorithm for estimating parameters of hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 38(9), 1639–1641.

See Also

viterbi, baumWelch, viterbiEM, hmm.setup

Examples

## create two state HMM with t distributions
state.names <- c("one","two")
transition <- c(0.035, 0.01)
location <- c(1, 2)
scale <- c(1, 1)
df <- c(4, 6)
hmm1 <- getHMM(list(a=transition, mu=location, sigma=scale, nu=df), 
    state.names)

## generate observation sequences from model
obs.lst <- list()
for(i in 1:50) obs.lst[[i]] <- sampleSeq(hmm1, 100)

## fit an HMM to the data (with fixed degrees of freedom)
hmm2 <- hmm.setup(obs.lst, state=c("one","two"), df=5)
hmm2.fit <- viterbiTraining(hmm2, obs.lst, max.iter=20, df=5, verbose=1)

## fit an HMM to the data, this time estimating the degrees of freedom
hmm3.fit <- viterbiTraining(hmm2, obs.lst, max.iter=20, verbose=1)
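
A further sketch, reusing hmm2 and obs.lst from above, combining the prior, convergence and model-saving arguments described under Arguments; the file name passed to keep.models is illustrative only.

## use the initial estimates as priors, tighten the convergence
## threshold, and save intermediate models (file name illustrative)
hmm4.fit <- viterbiTraining(hmm2, obs.lst, max.iter=20, eps=0.001,
    trans.prior=TRUE, init.prior=TRUE,
    keep.models="viterbiTraining_models", verbose=1)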
