Posterior sampling for Non-Local Priors

Description

Gibbs sampler for linear models and the Cox proportional hazards model under product non-local priors and Zellner's prior. Both sampling conditional on a model and Bayesian model averaging are implemented (see Details).

If x and y are not specified, samples from non-local priors/posteriors with density proportional to d(theta) N(theta; m, V) are produced, where d(theta) is the non-local penalty term.

Usage

rnlp(y, x, m, V, msfit, priorCoef, priorVar=igprior(alpha=0.01,lambda=0.01), niter=10^3,
burnin=round(niter/10), thinning=1, pp='norm')

Arguments

y

Vector with observed responses. When class(y)=='Surv' sampling is based on the Cox partial likelihood (a survival sketch is given at the end of the Examples); otherwise a linear model is assumed.

x

Design matrix with all potential predictors

m

Mean for the Normal kernel

V

Covariance for the Normal kernel

msfit

Object of class msfit, e.g. as returned by modelSelection. If left missing, sampling is under the full model y ~ x; otherwise the posterior model samples in msfit are used.

priorCoef

Prior distribution for the coefficients. Must be an object of class msPriorSpec with slot priorType set to 'coefficients'. Possible values for slot priorDistr are 'pMOM', 'piMOM', 'peMOM' and 'zellner' (see the construction sketch after this argument list).

priorVar

Prior on the residual variance. Must be an object of class msPriorSpec with slot priorType set to 'nuisancePars'. Slot priorDistr must be equal to 'invgamma'.

niter

Number of MCMC iterations

burnin

Number of burn-in MCMC iterations. Defaults to .1*niter. Set to 0 for no burn-in

thinning

MCMC thinning factor, i.e. only one out of every thinning iterations is kept. Defaults to no thinning.

pp

When msfit is provided, this is the method used to compute the posterior model probabilities that determine the sampled models. Can be 'norm' or 'exact'; see postProb for details.
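
For orientation, a minimal sketch of constructing priorCoef objects of each allowed type with the mombf prior constructors (the tau values are illustrative only, not recommendations):

library(mombf)
pc.mom  <- momprior(tau=0.348)    #pMOM prior
pc.imom <- imomprior(tau=0.133)   #piMOM prior
pc.emom <- emomprior(tau=0.119)   #peMOM prior
pc.zell <- zellnerprior(tau=1)    #Zellner's prior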

Details

The algorithm is implemented for product MOM (pMOM), product iMOM (piMOM) and product eMOM (peMOM) priors. The algorithm combines an orthogonalization that provides low serial correlation with a latent truncation representation that allows fast sampling.

When y and x are specified, sampling is from the linear regression posterior. When argument msfit is left missing, posterior sampling is for the full model regressing y on all covariates in x. When msfit is specified, each model is drawn with probability given by postProb(msfit). In this case, a Bayesian model averaging estimate of the regression coefficients can be obtained by applying colMeans to the rnlp output matrix, as sketched below.
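
A minimal, self-contained sketch of that workflow (the toy data, prior choices and tau values are illustrative assumptions, not part of this help page):

library(mombf)
set.seed(1)
x <- matrix(rnorm(100*3), ncol=3); y <- x[,1] + rnorm(100)   #toy data
priorCoef <- momprior(tau=0.348); priorVar <- igprior(alpha=.01, lambda=.01)
fit <- modelSelection(y=y, x=x, priorCoef=priorCoef, priorVar=priorVar)
thbma <- rnlp(y=y, x=x, msfit=fit, priorCoef=priorCoef, priorVar=priorVar, niter=1000)
colMeans(thbma)   #Bayesian model averaging estimate of the coefficients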

When y and x are left missing, sampling is from a density proportional to d(theta) N(theta; m,V), where d(theta) is the non-local penalty (e.g. d(theta)=prod(theta^(2r)) for the pMOM prior).
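
A minimal sketch of this kernel sampling (the values of m, V and tau are illustrative only):

library(mombf)
#Draw from a density proportional to the pMOM penalty times N(theta; m, V)
thkernel <- rnlp(m=c(0,0), V=diag(2), priorCoef=momprior(tau=1), niter=1000)
colMeans(thkernel)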

Value

Matrix with posterior samples

Author(s)

David Rossell

References

D. Rossell and D. Telesca. Non-local priors for high-dimensional estimation, 2014. http://arxiv.org/pdf/1402.5107v2.pdf

See Also

modelSelection to perform model selection and compute posterior model probabilities. For more details on prior specification see msPriorSpec-class.

Examples

#Load required packages (rmvnorm is from the mvtnorm package)
library(mombf)
library(mvtnorm)

#Generate data
set.seed(2)
n <- 10^3; tau <- 0.133; x <- rmvnorm(n,sigma=matrix(c(2,1,1,2),nrow=2))
thtrue <- c(0.5,1); phitrue <- 1
y <- thtrue[1]*x[,1] + thtrue[2]*x[,2] + rnorm(n,sd=sqrt(phitrue))

#Specify prior parameters
priorCoef <- imomprior(tau=1)
priorVar <- igprior(alpha=.01,lambda=.01)

th <- rnlp(y=y, x=x, niter=100, priorCoef=priorCoef, priorVar=priorVar)
colMeans(th)    #posterior mean of the regression coefficients
acf(th[,1])[1]  #lag-1 serial correlation for the first coefficient
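
A hedged sketch of the survival case (Cox partial likelihood), reusing x and the priors above; the simulated event times, censoring indicator and use of the survival package are assumptions for illustration only:

library(survival)
times <- rexp(n, rate=exp(thtrue[1]*x[,1] + thtrue[2]*x[,2]))  #simulated event times
status <- rbinom(n, size=1, prob=0.8)                          #1=event observed, 0=censored
ysurv <- Surv(times, status)
thcox <- rnlp(y=ysurv, x=x, niter=100, priorCoef=priorCoef, priorVar=priorVar)
colMeans(thcox)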