OptimMCL: Iterative procedure for maximising the Monte Carlo likelihood


View source: R/OptimMCL.R

Description

The function implements an iterative procedure that directly maximises the Monte Carlo likelihood, where the size of each updating step is limited by an experimental region defined through the estimated Monte Carlo variance.

Usage

OptimMCL(data, psi0, family, control = list(), mc.control = list())

Arguments

data

the data set to be analysed; for the direct CAR model it is a list of the same form as that used in loglik.dCAR, and for the glm with latent CAR variables it is the same as the output of CAR.simGLM.

psi0

the initial value for the importance sampler parameter

family

a character string taking values in "gauss" (for the direct CAR model), "binom" and "poisson" (Binomial/Poisson glm with latent CAR variables).

control

a list that controls the iterative procedures, see more in Details.

mc.control

a list that controls the MCMC algorithm; see the Details of postZ.

Details

The iterative procedure starts by drawing samples from the importance sampler with the parameter value given by psi0 and directly optimising the Monte Carlo likelihood. The importance sampler parameter is then updated using either the MLE found in the current step, if it lies within the experimental region (defined below), or a weighted average of the current psi value and the current MLE, with the weight specified by psi.ab in the control argument.

The experimental region is defined as the region where the estimated variance of the Monte Carlo likelihood is smaller than a/√(n.s), where n.s is the number of Monte Carlo samples and a is a fixed constant that can be specified through psi.ab in control.

The procedure iterates until convergence, defined as the increase in the Monte Carlo likelihood being smaller than min(1, 2*sqrt(mcl.var)), where mcl.var is the estimated variance of the Monte Carlo likelihood evaluated at the MC-MLE. At that point, the procedure increases the number of Monte Carlo samples and repeats the iteration until it converges again. The MC-MLE found at this second convergence is returned as the final result. The Monte Carlo sample size is increased by a user-specified multiple given in the control argument.
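The updating and convergence rules above can be sketched in plain R. This is a simplified illustration under stated assumptions, not mclcar's internal code: the function names update_psi and converged are hypothetical, and the weighted average is assumed to use the second entry of psi.ab as the weight on the current psi value.

```r
## Hypothetical sketch of one step of the iterative procedure.
## psi: current importance-sampler parameter; mle: current MC-MLE;
## mcl_var: estimated variance of the MC likelihood at the MLE;
## n_s: Monte Carlo sample size; a, b: the two entries of psi.ab.
update_psi <- function(psi, mle, mcl_var, n_s, a = 1, b = 0.5) {
  in_region <- mcl_var < a / sqrt(n_s)    # experimental region check
  if (in_region) mle                      # accept the MLE directly
  else b * psi + (1 - b) * mle            # otherwise take a weighted average
}

## Convergence check: increase in the MC likelihood below the tolerance.
converged <- function(delta_mcl, mcl_var) {
  delta_mcl < min(1, 2 * sqrt(mcl_var))
}
```

With the default psi.ab = c(1, 0.5), an MLE outside the experimental region moves psi only halfway towards it, which stabilises the iteration when the Monte Carlo variance is large.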

The control argument is a list containing the following elements:

n.iter,

maximum number of iterations; default value is 10 for direct CAR models and 20 for glm with latent CAR variables

mc.samples,

(for the direct CAR model only) an array of two elements, the first being the number of Monte Carlo samples and the second the multiple by which the sample size is increased after the first convergence; default is c(500, 2)

s.increase,

(for the glm with latent CAR variables only) a numeric value giving the multiple by which the sample size is increased after the first convergence; default value is 2.

rho.range,

the valid range for the parameter rho in the direct CAR model; default value is c(-0.249, 0.249)

psi.range,

the valid range for the parameters rho and sigma in the glm with latent CAR variables; default is c(-0.249, 0.249, 0.1, 10)

psi.ab,

an array with two entries controlling the experimental region and the updating rule; the first entry is the constant a in the experimental region defined above and the second is the weight used in the updating rule defined above; default is c(1, 0.5)

mc.var,

if TRUE, the estimated covariance matrix of the MC-MLE is returned at every iteration; default is FALSE

trace.all,

if TRUE, the results of every iteration are stored and returned in the final output; default is TRUE

verbose,

if TRUE, a summary message is printed after each iteration; default is TRUE
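Unspecified elements keep their defaults, so a control list only needs the entries being changed. For instance, a control list for a glm with latent CAR variables might look like the following (all values here are purely illustrative, not recommendations):

```r
## Illustrative control list for a glm with latent CAR variables;
## every value is an example chosen for this sketch.
ctrl <- list(
  n.iter     = 20,                        # at most 20 iterations
  s.increase = 2,                         # double the MC sample size after first convergence
  psi.range  = c(-0.249, 0.249, 0.1, 10), # valid ranges for rho and sigma
  psi.ab     = c(1, 0.5),                 # region constant a and updating weight
  mc.var     = TRUE,                      # return the MC-MLE covariance each iteration
  verbose    = FALSE                      # suppress per-iteration messages
)
```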

Value

When trace.all is TRUE, the function returns a list containing the following objects:

Psi,

a list of importance sampler parameter values from all iterations;

MC.datas,

a list of the objects generated by mcl.prep.dCAR or mcl.prep.glm in all iterations;

MC.MLEs,

a list of the MC-MLEs found in all iterations;

MC.Hess,

a list of the Hessian matrix at the MC-MLE from all iterations;

MC.Vars,

a list of the estimated covariance matrix of the MC-MLE from all iterations;

data,

the data object supplied to the function;

N.iter,

the total number of iterations;

total.time,

the total time elapsed;

convergence,

a logical value indicating whether the procedure converged or not;

mcsamples,

an array of two entries for the initial Monte Carlo sample size and the increased Monte Carlo sample size after the first convergence.

When trace.all is FALSE, the function returns the same list with each object in the list containing the result of the final iteration only.

Author(s)

Zhe Sha zhesha1006@gmail.com

References

Sha, Z. (2016). Estimating Conditional Auto-regression Models. DPhil thesis, University of Oxford.

See Also

summary.OptimMCL, rsmMCL

Examples

## Take long time to run
## Simulate some data to work with
set.seed(33)
n.torus <- 10
nb <- 30
rho <- 0.2
sigma <- 1.5
beta <- c(1, 1)
pars.true <- c(rho, sigma, beta)
X0 <- cbind(rep(1, n.torus^2), sample(log(1:n.torus^2)/5))
mydata2 <- CAR.simGLM(method = "binom", n = c(n.torus, n.torus), pars = pars.true,
                      Xs = as.matrix(X0), n.trial = nb)

## use a glm to find initial values for the importance sampler
data.glm <- data.frame(y=mydata2$y, mydata2$covX[,-1])
fit.glm <- glm(cbind(y, nb-mydata2$y) ~ .,data = data.glm, family=binomial)
library(spatialreg)
library(spdep)
logitp <- log((mydata2$y+0.5)/(mydata2$n.trial - mydata2$y + 0.5))
data.splm <- data.frame(y=logitp, mydata2$covX[,-1])
listW <- mat2listw(mydata2$W)
fit.splm <- spautolm(y~., data = data.splm, listw=listW, family = "CAR")
pars1 <- c(fit.splm$lambda, fit.splm$fit$s2, coef(fit.glm))

## Use the iterative procedure to find the MC-MLE
## !!!NOTE: the example below is only an illustration of usage
## users should increase the number of iterations and MCMC samples
## to get convergence results.
iter.mcmle <- OptimMCL(data = mydata2, psi0 = pars1, family = "binom",
                       control = list(n.iter = 1, mc.var = TRUE),
                       mc.control = list(N.Zy = 1e3, Scale = 1.65/(n.torus^(2/6)),
                                         thin = 5, burns = 5e2, method = "mala",
                                         scale.fixed = TRUE))
summary(iter.mcmle, family = "binom", mc.covar=TRUE)

mclcar documentation built on Jan. 8, 2022, 5:07 p.m.