dist_learn: Distributed learning for a longitudinal continuous-time...


View source: R/dist_learn.R

Description

Distributed learning for a longitudinal continuous-time zero-inflated Poisson hidden Markov model, where zero-inflation only happens in State 1. The priors, transition rates and state-dependent parameters can be subject-specific, clustered by group, or common, but at least one set of parameters has to be common across all subjects.
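For instance, with 10 subjects split into two groups of five (the setup used in the Examples section below), the clustering arguments can make the transition rates group-specific while keeping the Poisson means and structural zero proportions common to all subjects:

```r
# Grouping convention: each clustering vector has one entry per subject,
# giving that subject's cluster label; NULL means no grouping.
priorclust <- NULL                     # priors not grouped
tpmclust   <- c(rep(1, 5), rep(2, 5))  # subjects 1-5 vs. subjects 6-10
emitclust  <- rep(1, 10)               # one common set of Poisson means
zeroclust  <- rep(1, 10)               # one common structural zero proportion
group      <- list(1:5, 6:10)          # group membership as index vectors
```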

Usage

dist_learn(ylist, timelist, prior_init, tpm_init, emit_init, zero_init,
  yceil = NULL, rho = 1, priorclust = NULL, tpmclust = NULL,
  emitclust = NULL, zeroclust = NULL, group, maxit = 100, tol = 1e-04,
  ncores = 1, method = "Nelder-Mead", print = TRUE, libpath = NULL, ...)

Arguments

ylist

list of observed time series values for each subject

timelist

list of time indices

prior_init

a vector of initial values for the prior probability of each state

tpm_init

a matrix of initial values for the transition rate matrix

emit_init

a vector of initial values for the mean of each Poisson distribution

zero_init

a scalar initial value for the structural zero proportion

yceil

a scalar defining the ceiling of y, above which the values will be truncated. Defaults to NULL.

rho

the tuning parameter in the distributed learning algorithm. Defaults to 1.

priorclust

a vector specifying the grouping for the state priors. Defaults to NULL, which means no grouping.

tpmclust

a vector specifying the grouping for the state transition rates. Defaults to NULL, which means no grouping.

emitclust

a vector specifying the grouping for the Poisson means. Defaults to NULL, which means no grouping.

zeroclust

a vector specifying the grouping for the structural zero proportions. Defaults to NULL, which means no grouping.

group

a list containing group information.

maxit

maximum number of iterations. Defaults to 100.

tol

tolerance in terms of the relative change in the norm of the common coefficients. Defaults to 1e-4.

ncores

number of cores to be used for parallel computation. Defaults to 1.

method

method for the distributed optimization in the ADMM framework. Defaults to "Nelder-Mead".

print

whether to print the progress of each iteration. Defaults to TRUE.

libpath

path to the ziphsmm library if it is not in the default location. Defaults to NULL.

...

further arguments passed on to the optimization method

Value

the maximum likelihood estimates of the zero-inflated Poisson hidden Markov model

References

Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J., 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), pp.1-122.
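For reference, rho is the penalty parameter of the consensus ADMM iterations described above. A generic sketch of those updates (the standard global-consensus form from Boyd et al., not necessarily the exact updates used internally by dist_learn) is:

```latex
\begin{aligned}
x_i^{k+1} &= \operatorname*{arg\,min}_{x_i}\; f_i(x_i)
            + \tfrac{\rho}{2}\,\lVert x_i - z^k + u_i^k \rVert_2^2 \\
z^{k+1}   &= \frac{1}{N}\sum_{i=1}^{N}\bigl(x_i^{k+1} + u_i^k\bigr) \\
u_i^{k+1} &= u_i^k + x_i^{k+1} - z^{k+1}
\end{aligned}
```

where f_i is the negative log-likelihood for subject i, z collects the common (consensus) parameters, and u_i is the scaled dual variable.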

Examples

## Not run: 
set.seed(930518)
nsubj <- 10
ns <- 5040
ylist <- vector(mode="list",length=nsubj)
timelist <- vector(mode="list",length=nsubj)
prior1 <- c(0.5, 0.2, 0.3)
omega1 <- matrix(c(-0.3,0.2,0.1,
                  0.1,-0.2,0.1,
                  0.15,0.2,-0.35),3,3,byrow=TRUE)
prior2 <- c(0.3, 0.3, 0.4)
omega2 <- matrix(c(-0.5,0.25,0.25,
                   0.2,-0.4,0.2,
                   0.15,0.3,-0.45),3,3,byrow=TRUE)
emit <- c(50,200,600)
zero <- c(0.2,0,0)
for(n in 1:nsubj){
 timeindex <- rep(1,ns)
 for(i in 2:ns) timeindex[i] <- timeindex[i-1] + sample(1:4,1)
 timelist[[n]] <- timeindex
 if(n<=5){
   result <- hmmsim.cont(ns, 3, prior1, omega1, emit, zero, timeindex)
   ylist[[n]] <- result$series
 }else{
   result <- hmmsim.cont(ns, 3, prior2, omega2, emit, zero, timeindex)
   ylist[[n]] <- result$series
 }
}
prior_init <- c(0.5,0.2,0.3)
emit_init <- c(50, 225, 650)
zero_init <- 0.2
tpm_init <- matrix(c(-0.3,0.2,0.1,0.1,-0.2,0.1,0.15,0.2,-0.35),3,3,byrow=TRUE)
M <- 3
priorclust <- NULL
tpmclust <- c(1,1,1,1,1,2,2,2,2,2)
zeroclust <- rep(1,10)
emitclust <- rep(1,10)
group <- vector(mode="list",length=2)
group[[1]] <- 1:5; group[[2]] <- 6:10
result <- dist_learn(ylist, timelist, prior_init, tpm_init,
                     emit_init, zero_init, yceil = NULL, rho = 1,
                     priorclust, tpmclust, emitclust, zeroclust,
                     group, ncores = 1, maxit = 50, tol = 1e-4,
                     method = "CG", print = TRUE)

## End(Not run)

ziphsmm documentation built on May 2, 2019, 6:10 a.m.
