MHmcmc: Metropolis-Hastings sampling from User-Written R function

Description Usage Arguments Details Value References See Also Examples

Description

This function allows the user to construct a sample from a user-defined continuous distribution using a random walk Metropolis algorithm.

Usage

MHmcmc(logfun, burnin, mcmc, thin, tune, V, df, theta.init, verbose)

Arguments

logfun

The unnormalised log-density of the distribution from which to take a sample. This must be a function defined in R whose first argument is a continuous (possibly vector) variable. This first argument is the point in the state space at which the (log-)density is to be evaluated. See the Details section and the examples below for more information.

burnin

The number of burn-in iterations for the sampler

mcmc

The number of MCMC iterations after burnin

thin

The thinning interval used in the simulation. The number of MCMC iterations must be divisible by this value

tune

The tuning parameter for the Metropolis sampling. Can be either a positive scalar or a k-vector, where k is the length of theta.init

V

The scale matrix for the Student's t proposal distribution. Must be a square matrix with dimension equal to the length of theta.init

df

The degrees of freedom of the Student's t proposal distribution

theta.init

Starting values for the sampling. Must be of the appropriate dimension. It must also be the case that logfun(theta.init) returns a finite value; -Inf is not allowed.

verbose

A switch which determines whether or not the progress of the sampler is printed to the screen. If verbose is greater than 0, the iteration number and the Metropolis acceptance rate are printed to the screen every verbose-th iteration
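The finiteness requirement on theta.init can be checked before sampling. A minimal sketch, assuming a toy log-density (the logfun shown here is illustrative, not part of the package):

```r
# Sanity check on starting values: logfun(theta.init) must be finite
# (toy unnormalised log-density used for illustration only)
logfun <- function(theta) -0.5 * sum(theta^2)
theta.init <- c(0, 0)
stopifnot(is.finite(logfun(theta.init)))
```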

Details

MHmcmc produces a sample from a user-defined distribution using a random walk Metropolis algorithm with a multivariate Student's t proposal distribution. See Gelman et al. (2003) and Robert & Casella (2004) for details of the random walk Metropolis algorithm.
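The mechanics of the algorithm can be seen in a minimal self-contained sketch. This toy version targets a standard normal with a Gaussian random walk proposal (unlike MHmcmc itself, which uses a Student's t proposal); all names here are illustrative:

```r
# Toy random walk Metropolis sampler (illustration only, not the
# package implementation)
set.seed(1)
logfun <- function(theta) -0.5 * theta^2   # unnormalised N(0, 1) log-density
n_iter <- 5000
theta <- numeric(n_iter)
theta[1] <- 0
accept <- 0
for (i in 2:n_iter) {
  prop <- theta[i - 1] + rnorm(1, sd = 1)           # random walk proposal
  log_alpha <- logfun(prop) - logfun(theta[i - 1])  # log acceptance ratio
  if (log(runif(1)) < log_alpha) {                  # accept with prob min(1, alpha)
    theta[i] <- prop
    accept <- accept + 1
  } else {
    theta[i] <- theta[i - 1]                        # reject: keep current value
  }
}
accept / (n_iter - 1)   # acceptance rate
```

For a unit-variance target and a unit-variance proposal, the acceptance rate comes out well above the 0.234 often recommended for high-dimensional targets; in practice the tune argument is adjusted until the reported acceptance rate is reasonable.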

The proposal distribution is centered at the current value of theta and has scale matrix H, calculated as H = T V T, where T is the diagonal positive definite matrix formed from tune.
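The construction of H can be sketched directly, with illustrative values for tune and V:

```r
# How the proposal scale matrix is formed (sketch; tune and V as
# described above, values chosen for illustration)
tune <- c(0.5, 1.0, 2.0)        # one tuning value per parameter
V <- diag(3)                    # e.g. inverse Hessian at the posterior mode
T_mat <- diag(tune)             # diagonal matrix built from tune
H <- T_mat %*% V %*% T_mat      # scale matrix of the t proposal
diag(H)                         # 0.25 1.00 4.00
```

Each tuning value thus scales the corresponding row and column of V, so the proposal variance along parameter k grows with the square of tune[k].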

Value

An mcmc object that contains the posterior sample. This object can be summarized by functions provided by the coda package.

References

Chib S. & Greenberg E. (1995). Understanding the Metropolis-Hastings Algorithm. The American Statistician, 49, 327-335.

Gelman A., Carlin J. B., Stern H. S. & Rubin, D. B. (2003). Bayesian Data Analysis. 2nd Edition. Boca Raton: Chapman & Hall/CRC.

Plummer M., Best N., Cowles K. & Vines K. (2006). CODA: Convergence Diagnosis and Output Analysis for MCMC. R News, 6, 7-11.

Robert C. P. & Casella G. (2004). Monte Carlo Statistical Methods. 2nd Edition. New York: Springer.

See Also

Lubricant, nlpost_lub, nlpost_gomp

Examples

## Not run: 

data(Lubricant)
y <- Lubricant$viscos
x1 <- Lubricant$tempC
x2 <- Lubricant$pressure/1000
n <- length(y)
muBeta <-  rep(0, 9)
SigBeta <-  matrix(0, 9, 9)
diag(SigBeta) <-  50
sigScale <-  10

# re-define nlpost_lub, its gradient and Hessian matrix
ff <- function(x) nlpost_lub(beta = x[1:9], lsig = x[10],
                             y = y, x1 = x1, x2 = x2, n = n,
                             muBeta = muBeta, SigBeta = SigBeta,
                             sigScale = sigScale)
ff.gr <- function(x) grad_lub(beta = x[1:9], lsig = x[10], y = y,
                              x1 = x1, x2 = x2, n = n, muBeta = muBeta,
                              SigBeta = SigBeta, sigScale = sigScale)
ff.h <- function(x) hess_lub(beta = x[1:9], lsig = x[10], y = y, x1 = x1,
                             x2 = x2, n = n, muBeta = muBeta,
                             SigBeta = SigBeta, sigScale = sigScale)
# find minimum and the Hessian at the min value
init <-  c(10.522, 20.6, 1.464, -0.258, 0.022, 0.394, 0.036,
           46.566, -0.455, -3.195)
opt.post <-  nlminb(init, obj=ff, gradient = ff.gr,
                    hessian = ff.h, control = list(trace = 1))
opt.post$hessian <- ff.h(opt.post$par)
# degrees of freedom for the t proposal; dfprop was not defined in the
# original example, so an illustrative value is set here
dfprop <- 5
# the mcmc sample
mcmc.post <- MHmcmc(logfun = function(x) -ff(x),
                    burnin = 40000, mcmc = 1e+5, thin = 1, tune = 0.5,
                    V = solve(opt.post$hessian), df = dfprop,
                    theta.init = init, verbose = 10000)
# the marginal posteriors
labl <- c("beta1", "beta2", "beta3", "beta4", "beta5",
          "beta6", "beta7", "beta8", "beta9", "log sgima")
panel.hist <- function(x, ...) {
  usr <- par("usr"); on.exit(par(usr))
  par(usr = c(usr[1:2], 0, 1.5))
  h <- hist(x, plot = FALSE)
  breaks <- h$breaks; nB <- length(breaks)
  y <- h$counts; y <- y / max(y)
  rect(breaks[-nB], 0, breaks[-1], y, col = "lightblue", ...)
}

Lab.palette1 <- colorRampPalette(c("lightblue", "yellow",
                                    "orange", "red"),
                                     space = "Lab")

pdf("pairPost.pdf", width=10, height=10)
par(mai=c(0.08,0.08,0.08,0.08))
pairs(as.matrix(mcmc.post[,1:10]),
      panel = function(...)
                smoothScatter(...,
                              nrpoints = 50,
                              colramp = Lab.palette1,
                              add = TRUE
                              ),
      diag.panel = panel.hist,
      labels = labl)
dev.off()

## End(Not run)

erlisR/iLaplaceExamples documentation built on May 16, 2019, 8:48 a.m.