Description
EM algorithm to compute the maximum likelihood estimates of Gaussian hidden Markov models, with or without autoregressive structure, and with or without regularization on the covariance matrices and/or the autoregressive coefficient matrices.
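For orientation, a sketch of the assumed emission model (the notation here is mine, not the package's): with autoregressive order p (the arp argument), the observation y_t in state k is modeled as

y_t | state k ~ N( mu_k + A_{k,1} y_{t-1} + ... + A_{k,p} y_{t-p}, Sigma_k ),

where mu_k and Sigma_k are the state's mean and covariance, and the coefficient matrices A_{k,1}, ..., A_{k,p} are stored column-bound as a d x (d*p) matrix for a d-dimensional series (consistent with the 50 x 50 and 50 x 100 initial values in the examples below). Setting p = 0 recovers the plain Gaussian HMM.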
Usage

em.hmm(y, mod, ntimes = NULL, tol = 1e-04, maxit = 100, arp = 0,
  cov.shrink = 0, auto.lambda = 0, auto.alpha = 0, print = TRUE)
Arguments

y: observed series.

mod: list consisting of at least the following items: mod$m = scalar number of states; mod$delta = vector of initial values for the prior probabilities; mod$gamma = matrix of initial values for the state transition probabilities; mod$mu = list of initial values for the means; mod$sigma = list of initial values for the covariance matrices. For autoregressive hidden Markov models, the following additional items are also needed: mod$arp = scalar order of the autoregressive structure; mod$auto = list of initial values for the autoregressive coefficient matrices.

ntimes: length of each homogeneous time series. Defaults to NULL, which means the data consist of a single homogeneous time series.

tol: tolerance for relative change. Defaults to 1e-4.

maxit: maximum number of iterations. Defaults to 100.

arp: order of the autoregressive structure. Defaults to 0.

cov.shrink: shrinkage on the multivariate normal covariance matrices. Defaults to 0. See the references and the sketch after this list.

auto.lambda: elastic net shrinkage on the autoregressive coefficients. Defaults to 0. See the references and the sketch after this list.

auto.alpha: the elastic net mixing parameter, with 0 <= alpha <= 1. The penalty is defined as (1 - alpha)/2 ||.||_2^2 + alpha ||.||_1; alpha = 1 gives the lasso penalty and alpha = 0 the ridge penalty. Defaults to 0. Same as in the glmnet package.

print: whether to print the log-likelihood at each iteration. Defaults to TRUE.
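To make the two regularizers concrete, here is a minimal R sketch of the quantities described in the references; the function names and the exact shrinkage target are illustrative assumptions, not the package's internals.

#illustrative only: the names and the identity shrinkage target are assumptions
#elastic net penalty on an AR coefficient matrix A, as defined under
#auto.alpha: lambda * ((1 - alpha)/2 * ||A||_2^2 + alpha * ||A||_1)
enet_penalty <- function(A, lambda, alpha) {
  lambda * ((1 - alpha) / 2 * sum(A^2) + alpha * sum(abs(A)))
}
#Ledoit-Wolf-style covariance shrinkage: a convex combination of the
#sample covariance S and a scaled identity target, which keeps the
#estimate well-conditioned even when the series is short
shrink_cov <- function(S, s) {
  (1 - s) * S + s * mean(diag(S)) * diag(nrow(S))
}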
Value

a list containing the fitted parameters.
References

Rabiner, Lawrence R. "A tutorial on hidden Markov models and selected applications in speech recognition." Proceedings of the IEEE 77.2 (1989): 257-286.

Zou, Hui, and Trevor Hastie. "Regularization and variable selection via the elastic net." Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67.2 (2005): 301-320.

Ledoit, Olivier, and Michael Wolf. "A well-conditioned estimator for large-dimensional covariance matrices." Journal of Multivariate Analysis 88.2 (2004): 365-411.
Examples

set.seed(332213)
data(finance)
x <- data.matrix(finance)
#daily log returns (column 51 is dropped; the first 50 series are kept)
y <- x[-1,-51] #placeholder with the correct dimensions, filled in below
for(i in 2:nrow(x)){
y[i-1,] <- log(x[i,-51]) - log(x[i-1,-51])
}
#annualize the log return (roughly 252 trading days per year)
y <- y * 252
#first, fit a Gaussian HMM without autoregressive structure
m <- 2 #number of hidden states
#initialize the list of means
mu <- list(apply(y,2,mean), apply(y,2,mean))
#initialize the list of covariance matrices
sigma <- list(cov(y)*1.2,cov(y)*0.8)
#initialize the prior probability
delta <- c(0.5,0.5)
#initialize the transition probabilities
gamma <- matrix(c(0.9,0.1,0.2,0.8),2,2,byrow=TRUE)
mod1 <- list(m=m,mu=mu,sigma=sigma,delta=delta,gamma=gamma)
#will not run without shrinkage on the covariance matrices because the
#series is too short to reliably estimate the covariance structure
fit1 <- em.hmm(y=y,mod=mod1,cov.shrink=0.0001)
st1 <- viterbi.hmm(y=y,mod=fit1)
sp1 <- smooth.hmm(y=y,mod=fit1)
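#the fitted object is passed back in as the mod argument above, so it
#presumably carries the same named components as the input list (an
#assumption; the Value section only says "fitted parameters"):
str(fit1) #inspect the structure of the fitted parameter list
#fit1$gamma #e.g., the estimated transition probability matrix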
## Not run:
#second, fit a Gaussian HMM with 1st order autoregressive structure
#one 50 x 50 zero matrix per state, i.e. dimension x (dimension * arp)
auto <- list(matrix(0, 50, 50), matrix(0, 50, 50))
mod2 <- list(m=m,mu=mu,sigma=sigma,delta=delta,gamma=gamma,auto=auto)
fit2 <- em.hmm(y=y,mod=mod2,ntimes=NULL,cov.shrink=0.0001,arp=1,
auto.alpha=1,auto.lambda=0.1)
st2 <- viterbi.hmm(y=y,mod=fit2)
sp2 <- smooth.hmm(y=y,mod=fit2)
#third, fit a Gaussian HMM with 2nd order autoregressive structure
#for arp = 2 each coefficient matrix is 50 x 100 (dimension x (dimension * arp))
auto <- list(matrix(0, 50, 100), matrix(0, 50, 100))
mod3 <- list(m=m,mu=mu,sigma=sigma,delta=delta,gamma=gamma,auto=auto)
fit3 <- em.hmm(y=y,mod=mod3,ntimes=NULL,cov.shrink=0.0001,arp=2,
auto.alpha=1,auto.lambda=0.1)
st3 <- viterbi.hmm(y=y,mod=fit3)
sp3 <- smooth.hmm(y=y,mod=fit3)
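#with auto.alpha = 1 (lasso) many AR coefficients should be shrunk to
#exactly zero; assuming the fitted list keeps the mod naming, the induced
#sparsity could be checked with something like:
#mean(fit3$auto[[1]] == 0) #fraction of zero entries in state 1's matrix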
## End(Not run)
Example output (the per-iteration log-likelihood trace printed when print = TRUE; each restart at iteration 1 begins a new fit, and within a fit the log-likelihood is non-decreasing, as EM guarantees):

iteration 1 ; loglik = -73282.2
iteration 2 ; loglik = -70225.61
iteration 3 ; loglik = -69989.01
iteration 4 ; loglik = -69949.35
iteration 5 ; loglik = -69936.52
iteration 6 ; loglik = -69935.99
iteration 7 ; loglik = -69930
iteration 8 ; loglik = -69925.73
iteration 1 ; loglik = -73282.2
iteration 2 ; loglik = -67067.97
iteration 3 ; loglik = -66552.5
iteration 4 ; loglik = -66498.14
iteration 1 ; loglik = -73282.2