View source: R/Baum_Welch_algorithm.R
Baum_Welch_algorithm (R Documentation)
Estimates the parameters of a (non-stationary) discrete-time hidden Markov model. The Baum-Welch algorithm is a version of the EM (Expectation-Maximization) algorithm. See MacDonald & Zucchini (2009, Section 4.2) for further details.
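The EM iteration behind the algorithm can be sketched as follows. This is a minimal illustration of one Baum-Welch iteration for a Poisson-HMM under the scaled forward-backward scheme of MacDonald & Zucchini (2009, Section 4.2), not the package's implementation; all names in it are chosen for the sketch only.

```r
# One Baum-Welch (EM) iteration for an m-state Poisson HMM.
# E-step: scaled forward/backward probabilities give the conditional
# expectations of the missing state data; M-step: closed-form updates.
baum_welch_step <- function(x, delta, gamma, lambda) {
  T_len <- length(x); m <- length(lambda)
  p <- sapply(lambda, function(l) dpois(x, l))   # T x m state-dependent densities
  # scaled forward probabilities
  alpha <- matrix(0, T_len, m); scale <- numeric(T_len)
  a <- delta * p[1, ]; scale[1] <- sum(a); alpha[1, ] <- a / scale[1]
  for (t in 2:T_len) {
    a <- (alpha[t - 1, ] %*% gamma) * p[t, ]
    scale[t] <- sum(a); alpha[t, ] <- a / scale[t]
  }
  # scaled backward probabilities
  beta <- matrix(0, T_len, m); beta[T_len, ] <- 1
  for (t in (T_len - 1):1)
    beta[t, ] <- (gamma %*% (p[t + 1, ] * beta[t + 1, ])) / scale[t + 1]
  u <- alpha * beta; u <- u / rowSums(u)   # P(state at time t | observations)
  # pairwise state probabilities, needed for the transition-matrix update
  v <- array(0, c(T_len - 1, m, m))
  for (t in 1:(T_len - 1))
    v[t, , ] <- (alpha[t, ] %o% (p[t + 1, ] * beta[t + 1, ])) * gamma / scale[t + 1]
  # M-step: re-estimate delta, gamma and the Poisson means
  gamma_new <- apply(v, c(2, 3), sum)
  gamma_new <- gamma_new / rowSums(gamma_new)
  list(delta = u[1, ], gamma = gamma_new,
       lambda = colSums(u * x) / colSums(u),
       logL = sum(log(scale)))
}
```

Iterating this step until the log-likelihood increase falls below a tolerance is the essence of the algorithm this function implements.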
Baum_Welch_algorithm(
x,
m,
delta,
gamma,
distribution_class,
distribution_theta,
discr_logL = FALSE,
discr_logL_eps = 0.5,
BW_max_iter = 50,
BW_limit_accuracy = 0.001,
BW_print = TRUE,
Mstep_numerical = FALSE,
DNM_limit_accuracy = 0.001,
DNM_max_iter = 50,
DNM_print = 2
)
x
a vector object containing the time series of observations that are assumed to be realizations of the (hidden-Markov-state-dependent) observation process of the model.
m
integer; the (finite) number of states of the hidden Markov chain.
delta
a vector object containing starting values for the marginal probability distribution of the m states of the Markov chain at time point t = 1.
gamma
a matrix (nrow = ncol = m) containing starting values for the transition matrix of the hidden Markov chain.
distribution_class
a single character string object with the abbreviated name of the m state-dependent distributions of the observation process (e.g. "pois" for the Poisson distribution, as in the example below).
distribution_theta
a list object containing starting values for the parameters of the m state-dependent distributions of the observation process.
discr_logL
a logical object indicating whether the discrete log-likelihood should be used (for distribution_class = "norm") instead of the general log-likelihood. Default value is FALSE.
discr_logL_eps
a single numerical value used to approximately determine the discrete likelihood for a hidden Markov model based on normal distributions (for discr_logL = TRUE). Default value is 0.5.
BW_max_iter
a single numerical value representing the maximum number of iterations of the Baum-Welch algorithm. Default value is 50.
BW_limit_accuracy
a single numerical value representing the convergence criterion of the Baum-Welch algorithm. Default value is 0.001.
BW_print
a logical object indicating whether the log-likelihood at each iteration step shall be printed. Default value is TRUE.
Mstep_numerical
a logical object indicating whether the maximization step of the Baum-Welch algorithm shall be performed by numerical maximization using the nlm function. Default value is FALSE.
DNM_limit_accuracy
a single numerical value representing the convergence criterion of the numerical maximization via the nlm function (used to perform the M-step of the Baum-Welch algorithm). Default value is 0.001.
DNM_max_iter
a single numerical value representing the maximum number of iterations of the numerical maximization via the nlm function (used to perform the M-step of the Baum-Welch algorithm). Default value is 50.
DNM_print
a single numerical value determining the level of printing of the nlm function. Default value is 2.
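For orientation, a numerical M-step of this kind can be sketched with nlm directly. The snippet below is illustrative only (not the package's internal code): it fits a single Poisson mean by minimizing the negative log-likelihood, since nlm minimizes rather than maximizes; the data and names are invented for the sketch.

```r
# Illustrative sketch of numerical maximum-likelihood estimation via nlm.
# The log-parameterization keeps lambda positive during the optimization.
obs <- c(3, 5, 4, 6, 2, 5)
neg_logL <- function(log_lambda) -sum(dpois(obs, exp(log_lambda), log = TRUE))
opt <- nlm(neg_logL, p = log(mean(obs)), print.level = 0)
exp(opt$estimate)  # numerical MLE, close to mean(obs)
```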
Baum_Welch_algorithm returns a list containing the estimated parameters of the hidden Markov model and the following further components. See MacDonald & Zucchini (2009, Section 4.2) for details on the objects calculated within this algorithm.
input time-series of observations.
input number of hidden states in the Markov chain.
a (T, m) matrix (where T denotes the length of the observation time series and m the number of states of the HMM) containing probabilities (estimates of the conditional expectations of the missing data given the observations and the estimated model-specific parameters) calculated by the algorithm. See MacDonald & Zucchini (2009, Section 4.2.2) for further details.
a (T, m, m)-dimensional array (where T denotes the length of the observation time series and m the number of states of the HMM) containing probabilities (estimates of the conditional expectations of the missing data given the observations and the estimated model-specific parameters) calculated by the algorithm. See MacDonald & Zucchini (2009, Section 4.2.2) for further details.
a numerical value representing the log-likelihood calculated by the forward_backward_algorithm.
the number of iterations performed.
a numerical value representing the Bayesian information criterion for the hidden Markov model with estimated parameters.
a vector object containing the estimates for the marginal probability distribution of the m states of the Markov chain at time point t = 1.
a matrix containing the estimates for the transition matrix of the hidden Markov chain.
other input values (as passed as arguments above). If the algorithm stops before the targeted accuracy or the maximum number of iterations has been reached, further values are displayed and the estimates from the last successful iteration step are saved.
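For orientation, the Bayesian information criterion mentioned above can be reproduced from the log-likelihood with the conventional formula BIC = -2 * logL + k * log(T). The sketch below uses hypothetical values (the log-likelihood and the parameter count k are assumptions, not taken from the package):

```r
# Conventional BIC formula; logL and k below are hypothetical values,
# chosen only to illustrate the computation.
bic_hmm <- function(logL, k, T_len) -2 * logL + k * log(T_len)
# e.g. a 4-state Poisson HMM: 4 lambdas + 4*3 transition probabilities
# + 3 free entries of delta = 19 parameters (an assumed count)
bic_hmm(logL = -700, k = 19, T_len = 240)
```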
The basic algorithm for a Poisson-HMM is provided by MacDonald & Zucchini (2009, Section 4.2 and Appendix A.2.3). Extension and implementation by Vitali Witowski (2013).
Baum, L., Petrie, T., Soules, G., Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, vol. 41(1), 164–171.
Dempster, A., Laird, N., Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), vol. 39(1), 1–38.
MacDonald, I. L., Zucchini, W. (2009) Hidden Markov Models for Time Series: An Introduction Using R, Boca Raton: Chapman & Hall.
HMM_based_method, HMM_training, direct_numerical_maximization, forward_backward_algorithm, initial_parameter_training
x <- c(1,16,19,34,22,6,3,5,6,3,4,1,4,3,5,7,9,8,11,11,
14,16,13,11,11,10,12,19,23,25,24,23,20,21,22,22,18,7,
5,3,4,3,2,3,4,5,4,2,1,3,4,5,4,5,3,5,6,4,3,6,4,8,9,12,
9,14,17,15,25,23,25,35,29,36,34,36,29,41,42,39,40,43,
37,36,20,20,21,22,23,26,27,28,25,28,24,21,25,21,20,21,
11,18,19,20,21,13,19,18,20,7,18,8,15,17,16,13,10,4,9,
7,8,10,9,11,9,11,10,12,12,5,13,4,6,6,13,8,9,10,13,13,
11,10,5,3,3,4,9,6,8,3,5,3,2,2,1,3,5,11,2,3,5,6,9,8,5,
2,5,3,4,6,4,8,15,12,16,20,18,23,18,19,24,23,24,21,26,
36,38,37,39,45,42,41,37,38,38,35,37,35,31,32,30,20,39,
40,33,32,35,34,36,34,32,33,27,28,25,22,17,18,16,10,9,
5,12,7,8,8,9,19,21,24,20,23,19,17,18,17,22,11,12,3,9,
10,4,5,13,3,5,6,3,5,4,2,5,1,2,4,4,3,2,1)
# Assumptions (number of states, probability vector,
# transition matrix, and distribution parameters)
m <- 4
delta <- c(0.25,0.25,0.25,0.25)
gamma <- 0.7 * diag(m) + 0.3 / m  # rows sum to 1
distribution_class <- "pois"
distribution_theta <- list(lambda = c(4,9,17,25))
# Estimation of an HMM using the Baum-Welch algorithm
trained_HMM_with_m_hidden_states <-
Baum_Welch_algorithm(x = x,
m = m,
delta = delta,
gamma = gamma,
distribution_class = distribution_class,
distribution_theta = distribution_theta)
print(trained_HMM_with_m_hidden_states)
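Before training, the starting transition matrix built in the example can be sanity-checked; the check below is illustrative and not part of the package:

```r
m <- 4
gamma <- 0.7 * diag(m) + 0.3 / m  # numerically identical to the example's construction
# every row of a transition matrix must be a probability distribution
stopifnot(all(gamma >= 0), all(abs(rowSums(gamma) - 1) < 1e-12))
```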