buildMCEM  R Documentation 
Takes a NIMBLE model and builds an MCEM algorithm for it. The user must specify which latent nodes are to be integrated out in the E-step.
All other stochastic non-data nodes will be maximized over. If the nodes do not have positive density on the entire real line, box constraints can be used
to enforce this.
The E-step is done by a nimble MCMC sampler. The M-step is done by a call to R's optim
with method = 'L-BFGS-B'
if the nodes are constrained, or method = 'BFGS'
if the nodes are unconstrained.
buildMCEM(
model,
latentNodes,
burnIn = 500,
mcmcControl = list(adaptInterval = 100),
boxConstraints = list(),
buffer = 10^-6,
alpha = 0.25,
beta = 0.25,
gamma = 0.05,
C = 0.001,
numReps = 300,
forceNoConstraints = FALSE,
verbose = TRUE
)
model 
a nimble model 
latentNodes 
character vector of the names of the stochastic nodes to be integrated out. Names can be expanded, but don't need to be. For example, if the model contains a latent node array theta[1:10], one could provide either 'theta' or 'theta[1:10]'.
burnIn 
burn-in used for the MCMC sampler in the E-step
mcmcControl 
list passed to configureMCMC, which builds the MCMC sampler used in the E-step
boxConstraints 
list of box constraints for the nodes that will be maximized over. Each constraint is a list in which the first element is a character vector of node names to which the constraint applies and the second element is a vector giving the lower and upper limits. Limits of -Inf or Inf are allowed.
buffer 
A buffer amount for extending the boxConstraints. Many functions with boundary constraints will produce NaN or -Inf when parameter values are exactly on the boundary; the buffer shifts the constraints slightly inward to avoid this.
alpha 
probability of a type I error; here, the probability of accepting a parameter estimate that does not increase the likelihood. Default is 0.25.
beta 
probability of a type II error; here, the probability of rejecting a parameter estimate that does increase the likelihood. Default is 0.25.
gamma 
probability of deciding that the algorithm has converged, that is, that the difference between two Q functions is less than C, when in fact it has not. Default is 0.05. 
C 
determines when the algorithm has converged: when C falls above a (1-gamma) confidence interval around the difference in Q functions from time point t-1 to time point t, we say the algorithm has converged. Default is 0.001.
numReps 
number of bootstrap samples to use for asymptotic variance calculation. 
forceNoConstraints 
Avoid any constraints, even those from parameter bounds implicit in the model structure (e.g., from dunif or dgamma distributions). Setting this to TRUE might allow MCEM to run when the bounds of a parameter being maximized over depend on another parameter.
verbose 
logical indicating whether to print additional logging information. 
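As an illustration of the boxConstraints format, the list below is a hypothetical sketch (the node names alpha, beta, and p are assumptions, not taken from any particular model) specifying two separate constraints:

```r
# Hypothetical sketch of a boxConstraints list with two constraints:
box <- list(
  list(c('alpha', 'beta'), c(0, Inf)),  # alpha and beta must be positive
  list(c('p'),             c(0, 1))     # p must lie in [0, 1]
)
```

Each inner list pairs a character vector of node names with a length-two vector of lower and upper limits that apply to every node named in that vector.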
buildMCEM
calls the NIMBLE compiler to create the MCMC and objective function as nimbleFunctions. If the given model has already been used in compiling other nimbleFunctions, you may need to create a new copy of the model for buildMCEM to use.
buildMCEM uses an ascent-based MCEM algorithm, which includes rules for automatically increasing the number of MC samples as iterations increase, and for determining when convergence has been reached. Constraints on parameter values can be provided. If constraints are not provided, they will be automatically determined by NIMBLE. Initial values for the parameters are taken to be the values in the model at the time buildMCEM
is called, unless the values in the compiled model are changed before running the MCEM.
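For example, a short sketch (assuming the pump model defined in the example below) of setting starting parameter values before building the MCEM:

```r
# Hypothetical sketch: values present in the model when buildMCEM() is
# called become the starting values for the maximization.
pumpModel$alpha <- 0.5
pumpModel$beta  <- 0.5
pumpMCEM <- buildMCEM(model = pumpModel, latentNodes = 'theta[1:10]')
```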
an R list with two elements:
run
A function that runs the MCEM algorithm. This function takes the arguments listed in run Arguments below.
estimateCov
A function that estimates the asymptotic covariance of the parameters. The covariance is estimated using the method of Louis (1982).
This function takes the arguments listed in estimateCov Arguments below.
run Arguments
initM
starting number of iterations for the algorithm.
estimateCov Arguments
MLEs
named vector of MLE values. Must have a named MLE value for each stochastic, non-data, non-latent node. If the run() method has already been called, MLEs do not need to be provided.
useExistingSamples
logical argument. If TRUE and the run() method has previously been called, the covariance estimation will use MCMC samples from the last step of the MCEM algorithm.
Otherwise, an MCMC algorithm will be run for 10,000 iterations, and those samples will be used. Defaults to FALSE.
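Putting the two arguments together, a sketch of the two calling patterns (the MLE values shown are hypothetical, not computed results):

```r
# Reuse the MCMC samples from the final MCEM iteration (requires that
# run() has already been called):
covA <- pumpMCEM$estimateCov(useExistingSamples = TRUE)

# Or supply a named MLE vector directly; a fresh 10,000-iteration MCMC
# run then generates the samples (the values below are hypothetical):
covB <- pumpMCEM$estimateCov(MLEs = c(alpha = 0.82, beta = 1.26))
```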
Clifford Anderson-Bergman and Nicholas Michaud
Caffo, Brian S., Wolfgang Jank, and Galin L. Jones (2005). Ascent-based Monte Carlo expectation-maximization. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 235-251.
Louis, Thomas A (1982). Finding the Observed Information Matrix When Using the EM Algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 44(2), 226-233.
## Not run:
pumpCode <- nimbleCode({
  for (i in 1:N){
    theta[i] ~ dgamma(alpha, beta);
    lambda[i] <- theta[i]*t[i];
    x[i] ~ dpois(lambda[i])
  }
  alpha ~ dexp(1.0);
  beta ~ dgamma(0.1, 1.0);
})

pumpConsts <- list(N = 10,
                   t = c(94.3, 15.7, 62.9, 126, 5.24,
                         31.4, 1.05, 1.05, 2.1, 10.5))

pumpData <- list(x = c(5, 1, 5, 14, 3, 19, 1, 1, 4, 22))

pumpInits <- list(alpha = 1, beta = 1,
                  theta = rep(0.1, pumpConsts$N))

pumpModel <- nimbleModel(code = pumpCode, name = 'pump', constants = pumpConsts,
                         data = pumpData, inits = pumpInits)

# Want to maximize alpha and beta (both of which must be positive) and
# integrate over theta
box = list( list(c('alpha','beta'), c(0, Inf)))

pumpMCEM <- buildMCEM(model = pumpModel, latentNodes = 'theta[1:10]',
                      boxConstraints = box)
# Could also use latentNodes = 'theta' and buildMCEM() would figure out
# this means 'theta[1:10]'

MLEs <- pumpMCEM$run(initM = 1000)
cov <- pumpMCEM$estimateCov()

## End(Not run)