Distributed learning for a longitudinal continuous-time zero-inflated Poisson hidden Markov model, where zero-inflation occurs only in State 1 and covariates enter the state-dependent zero proportion and means. The priors, transition rates, and state-dependent intercepts and slopes can each be subject-specific, clustered by group, or common, but at least one set of parameters has to be common across all subjects.
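In State 1 the emission mixes a point mass at zero with a Poisson distribution, while the other states are ordinary Poisson; covariates enter through a logit link on the zero proportion and a log link on the Poisson mean. A minimal Python sketch of the two emission densities (illustrative only, not the package's R internals):

```python
import math

def zip_pmf(y, lam, p):
    """Zero-inflated Poisson pmf: structural zero with probability p,
    otherwise Poisson(lam). This form applies only to State 1."""
    pois = math.exp(-lam) * lam ** y / math.factorial(y)
    return p + (1 - p) * pois if y == 0 else (1 - p) * pois

def poisson_pmf(y, lam):
    """Ordinary Poisson pmf, as in the remaining states."""
    return math.exp(-lam) * lam ** y / math.factorial(y)

# Zero inflation raises the probability of observing a zero:
print(zip_pmf(0, lam=5.0, p=0.3))      # ~0.305
print(poisson_pmf(0, lam=5.0))         # ~0.007
```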
dist_learn2(ylist, xlist, timelist, prior_init, tpm_init, emit_init,
  zero_init, yceil = NULL, rho = 1, priorclust = NULL, tpmclust = NULL,
  emitclust = NULL, zeroclust = NULL, slopeclust = NULL, group,
  maxit = 100, tol = 1e-4, ncores = 1, method, print = TRUE,
  libpath = NULL, ...)
ylist: list of observed time series values for each subject.
xlist: list of design matrices for each subject.
timelist: list of time indices for each subject.
prior_init: vector of initial values for the prior probability of each state.
tpm_init: matrix of initial values for the transition rate matrix.
emit_init: vector of initial values for the mean of each Poisson distribution.
zero_init: scalar initial value for the structural zero proportion.
yceil: scalar defining the ceiling of y, above which values are truncated. Defaults to NULL.
rho: tuning parameter in the distributed learning algorithm. Defaults to 1.
priorclust: vector specifying the grouping for the state priors. Defaults to NULL, which means no grouping.
tpmclust: vector specifying the grouping for the state transition rates. Defaults to NULL, which means no grouping.
emitclust: vector specifying the grouping for the intercepts in the Poisson regressions. Defaults to NULL, which means no grouping.
zeroclust: vector specifying the grouping for the intercepts in the ZIP regression. Defaults to NULL, which means no grouping.
slopeclust: vector specifying the grouping for the slopes in the Poisson and ZIP regressions. Defaults to NULL, which means no grouping.
group: list containing the group information.
maxit: maximum number of iterations. Defaults to 100.
tol: tolerance in terms of the relative change in the norm of the common coefficients. Defaults to 1e-4.
ncores: number of cores to use for parallel computation. Defaults to 1.
method: method for the distributed optimization in the ADMM framework.
print: whether to print each iteration. Defaults to TRUE.
libpath: path to the ziphsmm library if it is not in the default location. Defaults to NULL.
...: further arguments passed on to the optimization methods.
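Because the observation times in timelist are irregular, tpm_init is a transition *rate* matrix Q (non-negative off-diagonal rates, rows summing to zero), not a one-step probability matrix; the transition probabilities over a gap of length t are given by the matrix exponential exp(Qt). A Python sketch using a hand-rolled truncated Taylor series (in real code one would use scipy.linalg.expm; the value of Q mirrors tpm_init in the example below):

```python
import numpy as np

def expm_taylor(A, terms=60):
    """Matrix exponential via truncated Taylor series; adequate for
    small, well-scaled matrices (use scipy.linalg.expm in practice)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# 2-state transition rate matrix: rows sum to zero.
Q = np.array([[-0.1,  0.1],
              [ 0.1, -0.1]])

P1 = expm_taylor(Q * 1.0)   # transition probabilities over a gap of 1
P5 = expm_taylor(Q * 5.0)   # longer gap -> closer to the stationary mix
print(P1[0])
print(P5[0])
```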
The maximum likelihood estimates of the zero-inflated hidden Markov model.
Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J., 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), pp.1-122.
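The ADMM scheme of Boyd et al. alternates local updates with a consensus step, with rho as the augmented-Lagrangian penalty. A toy consensus-ADMM sketch in Python on a separable quadratic; dist_learn2's per-subject objective is the far more complex penalized HMM log-likelihood, so this only illustrates the update structure and the role of rho:

```python
import numpy as np

# Toy consensus ADMM: minimize sum_i 0.5*(x_i - a_i)^2 subject to x_i = z.
a = np.array([1.0, 2.0, 6.0])          # subject-specific targets (illustrative)
rho = 1.0                              # penalty parameter, cf. the rho argument
x = np.zeros_like(a)
z = 0.0
u = np.zeros_like(a)                   # scaled dual variables
for _ in range(100):
    x = (a + rho * (z - u)) / (1 + rho)    # local (per-subject) updates
    z = np.mean(x + u)                     # consensus update
    u = u + x - z                          # dual ascent
print(round(z, 6))                     # -> 3.0, the mean of a
```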
## Not run:
library(ziphsmm)
set.seed(12933)
nsubj <- 20
ns <- 4000
ylist <- vector(mode="list",length=nsubj)
xlist <- vector(mode="list",length=nsubj)
timelist <- vector(mode="list",length=nsubj)
priorparm1 <- 0
priorparm2 <- 1
tpmparm1 <- c(-2,-2)
tpmparm2 <- c(0,0)
zeroparm <- c(-2,0)
emitparm <- c(4,0, 6,0)
zeroindex <- c(1,0)
for(n in 1:nsubj){
  xlist[[n]] <- matrix(rep(c(0,1,0,1), rep(1000,4)), nrow=4000, ncol=1)
  timeindex <- rep(1,4000)
  for(i in 2:4000) timeindex[i] <- timeindex[i-1] + sample(1:4,1)
  timelist[[n]] <- timeindex
  if(n <= 10){
    workparm <- c(priorparm1, tpmparm1, zeroparm, emitparm)
  }else{
    workparm <- c(priorparm2, tpmparm2, zeroparm, emitparm)
  }
  result <- hmmsim2.cont(workparm, 2, 4000, zeroindex, emit_x=xlist[[n]],
                         zeroinfl_x=xlist[[n]], timeindex=timeindex)
  ylist[[n]] <- result$series
}
prior_init <- c(0.5, 0.5)
tpm_init <- matrix(c(-0.1, 0.1, 0.1, -0.1), 2, 2, byrow=TRUE)
zero_init <- 0.2
emit_init <- c(50, 400)
####
M <- 2
priorclust <- c(rep(1,10),rep(2,10))
tpmclust <- c(rep(1,10),rep(2,10))
zeroclust <- NULL
emitclust <- NULL
slopeclust <- rep(1,20)
group <- vector(mode="list",length=2)
group[[1]] <- 1:10; group[[2]] <- 11:20
###
time <- proc.time()
result <- dist_learn2(ylist, xlist, timelist, prior_init, tpm_init,
                      emit_init, zero_init, yceil=NULL, rho=1,
                      priorclust, tpmclust, emitclust, zeroclust,
                      slopeclust, group, ncores=1,
                      maxit=10, tol=1e-4, method="CG", print=TRUE)
proc.time() - time
## End(Not run)