| modelSelectionGGM | R Documentation |
Bayesian model selection and averaging for Gaussian graphical models.
modelSelectionGGM performs standard Bayesian model selection/averaging (BMS/BMA) under the specified priors.
modelSelectionGGM_eBayes uses empirical Bayes to set edge prior inclusion probabilities pi. By default a common prior inclusion probability is set on all edges. If Z is specified, the edge inclusion probabilities can be regressed on meta-covariates Z via log(pi/(1-pi))= Z w.
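As a quick illustration of the logistic link above, the sketch below solves log(pi/(1-pi)) = Z w for pi in base R. The values of Z and w here are hypothetical; in practice modelSelectionGGM_eBayes estimates w from the data.

```r
# Hypothetical meta-covariates for p=3 covariates: intercept plus one covariate
Z <- cbind(1, c(0.2, -1.0, 1.5))
w <- c(-2, 1)                      # hypothetical hyper-parameter vector
pinc <- drop(plogis(Z %*% w))     # inverse of log(pi/(1-pi)) = Z w
round(pinc, 3)                    # edge prior inclusion probabilities
```

Larger meta-covariate values (relative to w) push the corresponding edge prior inclusion probabilities towards 1.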
modelSelectionGGM(y, priorCoef=normalidprior(tau=1),
priorModel=modelbinomprior(1/ncol(y)),
priorDiag=exponentialprior(lambda=1), center=TRUE, scale=TRUE,
global_proposal= 'none', prob_global=0.5,
tempering= 0.5, truncratio= 100,
save_proposal= FALSE, niter=10^3, burnin= round(niter/10),
updates_per_iter= ceiling(sqrt(ncol(y))), updates_per_column= 10,
sampler='Gibbs', pbirth=0.75, pdeath=0.5*(1-pbirth),
bounds_LIT, Omegaini='glasso-ebic', verbose=TRUE)
modelSelectionGGM_eBayes(Z, wini, niter.mcmc= 5000,
niter.mstep= 1000, niter.eBayes= 20, priorvar.w,
verbose=TRUE, ...)
y |
Data matrix |
priorCoef |
Prior on off-diagonal entries of the precision matrix, conditional on their not being zero (slab). See details |
priorModel |
Prior probabilities on having non-zero off-diagonal
entries of the precision matrix. Possible priors are modelbinomprior() and modelbbprior().
See details. |
priorDiag |
Prior on diagonal entries of the precision matrix. See details. |
center |
If TRUE, the columns of y are centered to zero mean prior to the analysis |
scale |
If TRUE, the columns of y are scaled to unit variance prior to the analysis |
global_proposal |
Either "none" (only local proposals are used) or "regression" (a
global proposal based on linear regression is also used). See details |
prob_global |
Probability of proposing a sample from a
global proposal, see details. This argument is ignored if
global_proposal== "none" |
tempering |
Power to which global proposal probabilities are raised, see details |
truncratio |
In the global proposal, any model's proposal probability
is >= prob(top model) / truncratio. This ensures bounded weight ratios
in the MH step, improving the poor mixing that occurs when the current state
has low proposal probability, often at the cost of decreasing the acceptance rate |
save_proposal |
If TRUE, the global proposal distribution is stored in the returned object
(element proposal) |
sampler |
Posterior sampler used for the local updates: Gibbs (the default),
birth-death-swap, or the LIT algorithm of Zhou et al. See details |
niter |
Number of posterior samples to be obtained. Each iteration
consists of selecting a column of the precision matrix at random and
making updates_per_column proposals, see details |
burnin |
The first burnin samples will be discarded |
updates_per_iter |
An iteration consists of selecting
updates_per_iter columns of the precision matrix at random and
updating each, see details |
updates_per_column |
See niter and updates_per_iter |
pbirth |
Probability of a birth move. The probability of a swap move
is 1 - pbirth - pdeath |
pdeath |
Probability of a death move. Ignored unless
the birth-death-swap sampler is used |
bounds_LIT |
log-proposal density bounds for the locally-informed LIT
algorithm of Zhou et al. These bound the proposal density for a death move
and for a birth move |
Omegaini |
Initial value of the precision matrix Omega. "null"
sets all off-diagonal entries to 0. "glasso-bic" and "glasso-ebic" use
GLASSO with regularization parameter set via BIC/EBIC,
respectively. Alternatively, a user-supplied precision matrix can be given,
as in the example below |
verbose |
If TRUE, progress information is printed |
Z |
p x q matrix containing the q meta-covariates for the p
covariates. An intercept is automatically added (including when
Z is not specified) |
wini |
Optional. Initial value for the q-dimensional hyper-parameter w |
niter.mcmc |
Number of iterations in the final MCMC, run once after hyper-parameter estimates have been obtained |
niter.mstep |
Number of MCMC iterations in each M-step required to update hyper-parameter estimates |
niter.eBayes |
Max number of iterations in the empirical Bayes optimization algorithm. The algorithm also stops when the objective function has not improved for two or more consecutive iterations
priorvar.w |
Hyper-parameters w follow the prior w ~ N(0, priorvar.w (Z^T Z/p)^(-1)), where priorvar.w is the prior variance. By default it is set such that all prior inclusion probabilities lie in (0.001, 0.999) with 0.95 prior probability
... |
Further parameters passed on to modelSelectionGGM |
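The prior on w described under priorvar.w can be sketched in base R. The dimensions, the Z matrix, and the priorvar.w value below are hypothetical; only standard base R functions are used.

```r
set.seed(1)
p <- 50; q <- 2; priorvar.w <- 4           # hypothetical dimensions and prior variance
Z <- cbind(1, rnorm(p))                    # meta-covariates, with an intercept column
V <- priorvar.w * solve(crossprod(Z) / p)  # prior covariance priorvar.w * (Z'Z/p)^{-1}
w <- drop(t(chol(V)) %*% rnorm(q))         # one draw w ~ N(0, V)
```

Scaling Z'Z by p makes the prior variance comparable across different numbers of covariates p.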
Let Omega be the inverse covariance matrix. A spike-and-slab prior is used. Specifically, independent priors are set on all Omega[j,k], and then a positive-definiteness truncation is added.
The prior on diagonal entries Omega[j,j] is given by priorDiag.
Specifically, if priorDiag = exponentialprior(lambda) then
the prior is Omega[j,j] ~ Exp(lambda).
Off-diagonal Omega[j,k] are equal to zero with probability given by
priorModel and otherwise
arise from the distribution indicated in priorCoef (slab).
If priorCoef == normalidprior(tau) then Omega[j,k] ~ N(0, var=tau).
If priorModel == modelbinomprior(p) then Omega[j,k] is non-zero with probability p, independently across j < k. If priorModel == modelbbprior(a,b) then p ~ Beta(a,b).
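A small base-R simulation of this spike-and-slab form for a single off-diagonal entry, ignoring the positive-definiteness truncation. The values of tau and the inclusion probability pincl below are arbitrary illustration choices.

```r
set.seed(1)
tau <- 1; pincl <- 0.1                            # slab variance, inclusion probability
nonzero <- rbinom(10^4, 1, pincl)                 # edge indicators
omega <- nonzero * rnorm(10^4, 0, sqrt(tau))      # zero (spike) or N(0, tau) (slab)
mean(omega != 0)                                  # empirical inclusion rate, close to pincl
```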
Inference is based on MCMC posterior sampling. All sampling algorithms proceed by updating Omega[,k] given y and Omega[,-k] (of course, Omega[k,] is also set to Omega[,k]). Omega[,k] is updated by first updating the set of non-zero entries (i.e. edges in the graphical model) using either Gibbs sampling or a proposal distribution (see below), and then the non-zero entries of Omega[,k] are updated from their exact posterior given the current set of edges.
An MCMC iteration consists of iterating over updates_per_iter columns
chosen at random and, for each column, making updates_per_column
proposals.
If global_proposal == "none", a local MCMC proposal is used to
update what entries in Omega[,k] are zero.
Local means that the proposed model is a neighbour of the current model,
e.g. only one edge is added or removed.
Available local kernels are Gibbs, birth-death-swap and LIT (Zhou et al 2022).
If global_proposal == "regression", a global proposal is used.
A new model (set of non-zero entries in Omega[,k]) is proposed based
on the posterior distribution of a linear regression of y[,k] on the other
covariates.
prob_global indicates the probability of using the global proposal,
otherwise a local proposal is used.
Proposal probabilities are tempered by raising them to the power
tempering. Further, any model with probability
below prob(top model) / truncratio is assigned proposal probability
prob(top model) / truncratio.
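The tempering and truncation of the global proposal described above can be sketched in base R. Here temper_truncate is an illustrative helper, not a package function, and the input probabilities are made up.

```r
temper_truncate <- function(prob, tempering=0.5, truncratio=100) {
  q <- prob^tempering               # temper: flatten the distribution
  q <- pmax(q, max(q)/truncratio)   # truncate: floor low-probability models
  q / sum(q)                        # renormalize into a proposal distribution
}
prop <- temper_truncate(c(0.90, 0.099, 0.001, 1e-8))
max(prop) / min(prop)               # weight ratios are bounded by truncratio
```

The floor guarantees the MH weight ratio between any two models stays below truncratio, at the price of proposing very unlikely models slightly more often.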
Posterior inference on the inverse covariance of y.
Object of class msfit_ggm, which extends a list with elements
postSample |
Posterior samples for the upper-diagonal entries of the precision matrix. Stored as a sparse matrix; see package Matrix for utilities to work with such matrices |
prop_accept |
Proportion of accepted proposals |
proposal |
If save_proposal==TRUE, the global proposal distribution |
proposaldensity |
log-proposal density for the samples in
postSample |
margpp |
Rao-Blackwellized estimates of posterior marginal inclusion probabilities. Only valid when using the Gibbs algorithm |
priors |
List storing the priors specified when calling
modelSelectionGGM |
p |
Number of columns in y |
indexes |
Indicates what row/column of Omega is stored in each
column of postSample |
samplerPars |
MCMC sampling parameters |
almost_parallel |
Stores the input argument almost_parallel |
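Because postSample is sparse, utilities from package Matrix (shipped with R) apply directly. A toy sketch with a hypothetical stand-in for postSample (3 posterior draws of 2 parameters):

```r
library(Matrix)
# Toy stand-in for fit$postSample: rows are posterior draws, columns are entries
S <- sparseMatrix(i=c(1,2,3,2), j=c(1,1,1,2), x=c(0.5,-0.2,0.3,0.4), dims=c(3,2))
colMeans(S)   # posterior means; Matrix provides colMeans methods for sparse matrices
```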
David Rossell
Zhou, Quan, et al. Dimension-free mixing for high-dimensional Bayesian variable selection. JRSS-B 2022, 84, 1751-1784
msfit_ggm-class for further details on the output.
icov for the estimated precision (inverse covariance) matrix.
coef.msfit_ggm for Bayesian model averaging estimates and
intervals.
#Simulate data with p=3
Th= diag(3); Th[1,2]= Th[2,1]= 0.5
sigma= solve(Th)
z= matrix(rnorm(1000*3), ncol=3)
y= z %*% chol(sigma)
#Obtain posterior samples
#Since y has few columns, initialize the MCMC at the sample precision matrix
#Otherwise, leave the argument Omegaini in modelSelectionGGM at its default
Omegaini= solve(cov(y))
fit= modelSelectionGGM(y, scale=FALSE, Omegaini=Omegaini)
#Parameter estimates, intervals, prob of non-zero
coef(fit)
#Estimated inverse covariance
icov(fit)
#Estimated inverse covariance, entries set to 0
icov(fit, threshold=0.95)
#Shows first posterior samples
head(fit$postSample)