Description
Given an input vector of integer read counts, the function optimizes the parameters of a negative binomial hidden Markov model (NB HMM) with K hidden states using Expectation Conditional Maximization, applying the forward-backward algorithm to achieve exact inference in the E-step.
Usage

nbh_em(count, TRANS, alpha, beta, NBH_NIT_MAX = 250,
       NBH_TOL = 0.001, MAXALPHA, MAXBETA)

Arguments
count: A vector of integers, conceptually representing the read counts within bins along a chromosome.

TRANS: Transition probability matrix, a square matrix of probabilities (0 <= p <= 1) whose row and column dimensions equal the length of alpha and beta, and whose rows each sum to 1 (within a numerical tolerance of 1e-6); a hypothetical input check is sketched after this list.

alpha: Shape parameters of the NB components, a vector of positive values with length equal to that of beta and to the number of rows/columns of TRANS.

beta: Inverse scale parameters of the NB components, a vector of positive values with length equal to that of alpha and to the number of rows/columns of TRANS.

NBH_NIT_MAX: Maximum number of EM iterations for the negative binomial hidden Markov model (NBH) (Default: 250).

NBH_TOL: Convergence threshold, expressed as the fractional increase in likelihood (given the current NBH parameters) relative to the likelihood from the previous iteration; EM for the NBH stops when the improvement falls below this threshold (Default: 0.001).

MAXALPHA: The maximum value of alpha, guarding against updates that exceed the numerical upper limit of the system; once alpha becomes larger than MAXALPHA, it is capped at MAXALPHA.

MAXBETA: The maximum value of beta, guarding against updates that exceed the numerical upper limit of the system; once beta becomes larger than MAXBETA, it is capped at MAXBETA.
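As a quick illustration of the constraints above, here is a hypothetical input check; the function name, the stopifnot-based style, and the exact tolerance applied are illustrative assumptions, not part of the package:

## Hypothetical sanity check for nbh_em inputs (not part of RIPSeeker)
check_nbh_inputs <- function(count, TRANS, alpha, beta) {
  K <- length(alpha)
  stopifnot(
    all(count >= 0), all(count == round(count)),          # non-negative integer counts
    is.matrix(TRANS), nrow(TRANS) == K, ncol(TRANS) == K, # square, K x K
    all(TRANS >= 0 & TRANS <= 1),                         # entries are probabilities
    all(abs(rowSums(TRANS) - 1) < 1e-6),                  # rows sum to 1
    length(beta) == K, all(alpha > 0), all(beta > 0)      # matched positive vectors
  )
  invisible(TRUE)
}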
Details

Given a K-state HMM with NB emissions (NBH), the goal is to maximize the likelihood function with respect to the parameters, which comprise α_k and β_k for the K NB components together with the transition probabilities A_jk between any pair of states j and k, the latter serving as the priors p(z_n = k | z_{n-1} = j). Because no analytical solution exists for the maximum likelihood (ML) estimators of these quantities, a modified EM procedure called Expectation Conditional Maximization (ECM) is employed (Meng and Rubin, 1993).

In the E-step, the posterior probabilities of the hidden states are evaluated by the forward-backward algorithm using NB density functions with the initialized alpha, beta, and TRANS. In the CM-step, A_jk is updated first, followed by Newton updates of α_k and β_k. The EM iteration terminates when the percentage increase of the log-likelihood drops below NBH_TOL; this stopping rule is well defined since EM is guaranteed to increase the likelihood at every iteration and thus to converge. For more details, please see the RIPSeeker manuscript.
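To make the procedure concrete, below is a minimal sketch of a single ECM iteration in R. It is illustrative only, not the package's actual implementation: the uniform initial state distribution, the parameterization dnbinom(x, size = alpha, prob = beta/(beta + 1)) (shape alpha, inverse scale beta), and the safeguarded single Newton step are all assumptions.

## One illustrative ECM iteration for the NB HMM (not RIPSeeker's code)
nbh_ecm_iter <- function(count, TRANS, alpha, beta) {
  K <- length(alpha); N <- length(count)

  ## Emission probabilities: N x K matrix of NB densities
  dens <- sapply(seq_len(K), function(k)
    dnbinom(count, size = alpha[k], prob = beta[k] / (beta[k] + 1)))

  ## E-step: scaled forward-backward recursions
  fwd <- matrix(0, N, K); scl <- numeric(N)
  fwd[1, ] <- dens[1, ] / K                 # uniform initial distribution (assumption)
  scl[1] <- sum(fwd[1, ]); fwd[1, ] <- fwd[1, ] / scl[1]
  for (t in 2:N) {
    fwd[t, ] <- (fwd[t - 1, ] %*% TRANS) * dens[t, ]
    scl[t] <- sum(fwd[t, ]); fwd[t, ] <- fwd[t, ] / scl[t]
  }
  bwd <- matrix(0, N, K); bwd[N, ] <- 1
  for (t in (N - 1):1)
    bwd[t, ] <- (TRANS %*% (dens[t + 1, ] * bwd[t + 1, ])) / scl[t + 1]
  post <- fwd * bwd                         # posterior state probabilities

  ## CM-step 1: update A_jk from expected transition counts
  xi <- matrix(0, K, K)
  for (t in 1:(N - 1))
    xi <- xi + TRANS * outer(fwd[t, ], dens[t + 1, ] * bwd[t + 1, ]) / scl[t + 1]
  TRANS <- xi / rowSums(xi)

  ## CM-step 2: closed-form beta_k given alpha_k, then one Newton step on alpha_k
  for (k in seq_len(K)) {
    w <- post[, k]
    beta[k] <- alpha[k] * sum(w) / sum(w * count)       # solves dQ/d(beta_k) = 0
    gr <- sum(w * (digamma(count + alpha[k]) - digamma(alpha[k]) +
                     log(beta[k] / (beta[k] + 1))))
    he <- sum(w * (trigamma(count + alpha[k]) - trigamma(alpha[k])))
    alpha[k] <- max(alpha[k] - gr / he, 1e-3)           # safeguarded Newton update
  }

  list(TRANS = TRANS, alpha = alpha, beta = beta,
       postprob = post, logl = sum(log(scl)))           # log p(count) from the scales
}

Iterating such an update until the relative increase of the log-likelihood falls below NBH_TOL reproduces the stopping rule described above.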
Value

A list containing:

alpha: optimized α_k of the NB component for each state k

beta: optimized β_k of the NB component for each state k

TRANS: optimized transition probability matrix

logl: log-likelihood at each EM iteration

postprob: posterior probabilities of the hidden states for each observed data point at the last EM iteration

dens: the negative binomial densities computed at the last EM iteration
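As a brief sketch of how the returned list might be used (count and the initial parameters are as in the Examples below; the monotonicity check is merely illustrative):

## Hypothetical post-processing of an nbh_em fit
fit <- nbh_em(count, TRANS0, alpha0, beta0)
map.state <- max.col(fit$postprob)     # MAP hidden state for each bin
ll <- as.numeric(fit$logl)
stopifnot(all(diff(ll) > -1e-8))       # EM log-likelihood should be non-decreasing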
Author(s)

Yue Li
References

Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286. doi:10.1109/5.18626

Bishop, C. M. (2006). Pattern Recognition and Machine Learning (pp. 605-631). Information Science and Statistics. Springer.

Meng, X. L. and Rubin, D. B. (1993). Maximum likelihood estimation via the ECM algorithm: a general framework. Biometrika, 80(2), 267-278.

Fessler, J. A. and Hero, A. O. (1994). Space-alternating generalized expectation-maximization algorithm. IEEE Transactions on Signal Processing, 42(10), 2664-2677.

Cappé, O. (2001). H2M: A set of MATLAB/Octave functions for the EM estimation of mixtures and hidden Markov models. http://perso.telecom-paristech.fr/cappe/h2m/
See Also

nbh_init, nbh, nbh.GRanges, nbh_vit, nbm_em
Examples

# Simulate data
TRANS_s <- matrix(c(0.9, 0.1, 0.3, 0.7), nrow = 2, byrow = TRUE)
alpha_s <- c(2, 4)
beta_s <- c(1, 0.25)
Total <- 100

x <- nbh_gen(TRANS_s, alpha_s, beta_s, Total)
count <- x$count
label <- x$label
Total <- length(count)

# Dummy initialization
TRANS0 <- matrix(rep(0.5, 4), 2)
alpha0 <- c(1, 20)
beta0 <- c(1, 1)
NIT_MAX <- 50
TOL <- 1e-100   # effectively forces EM to run all NIT_MAX iterations

nbh <- nbh_em(count, TRANS0, alpha0, beta0, NIT_MAX, TOL)

# Accuracy of the MAP states from the marginal posteriors
map.accuracy <- length(which(max.col(nbh$postprob) == label)) / Total

# Accuracy of the joint MAP (Viterbi) state sequence
vit <- nbh_vit(count, nbh$TRANS, nbh$alpha, nbh$beta)
vit.accuracy <- length(which(vit$class == label)) / Total

# Plots
par(mfrow = c(2, 2), cex.lab = 1.2, cex.main = 1.2)

plot(count, col = "blue", type = "l",
     main = sprintf("A. Simulated Data (Total = %i)", Total))

plot(as.numeric(nbh$logl), xlab = "EM Iteration", ylab = "Log-Likelihood",
     main = "B. Log-Likelihood via EM")
grid()

# Marginal posterior probabilities vs. true states
plot(nbh$postprob[, 2], col = "blue", type = "l", ylim = c(0, 1),
     ylab = "Marginal Posterior or True State")
points(label - 1, col = "red")
title(main = sprintf("C. MAP Prediction Accuracy = %.2f%s", 100 * map.accuracy, "%"))

# Viterbi states vs. true states
plot(vit$class - 1, col = "dark green", type = "l", ylim = c(0, 1),
     ylab = "Viterbi or True State")
points(label - 1, col = "red")
title(main = sprintf("D. Viterbi Prediction Accuracy = %.2f%s", 100 * vit.accuracy, "%"))