ssgraph: Algorithm for graphical models using spike-and-slab priors


View source: R/ssgraph.R

Description

This function provides a sampling algorithm for Bayesian model determination in undirected graphical models, based on spike-and-slab priors.

Usage

ssgraph( data, n = NULL, method = "ggm", not.cont = NULL, 
         iter = 5000, burnin = iter / 2, var1 = 4e-04, 
         var2 = 1, lambda = 1, g.prior = 0.5, g.start = "full", 
         sig.start = NULL, save = FALSE, print = 1000, cores = NULL )

Arguments

data

There are two options: (1) an (n \times p) matrix or a data.frame corresponding to the data, or (2) a (p \times p) covariance matrix S = X'X, where X is the data matrix (n is the sample size and p is the number of variables). It can also be an object of class "sim" from the function bdgraph.sim of the R package BDgraph. The type of input matrix is identified automatically by checking for symmetry.
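
For instance, the covariance-matrix input requires n to be supplied explicitly (a minimal sketch; the data dimensions are illustrative):

```r
library( ssgraph )

# Raw data matrix X with n = 80 observations of p = 7 variables
X <- matrix( rnorm( 80 * 7 ), nrow = 80, ncol = 7 )

# Pass S = X'X instead of the raw data; n must then be given
S <- t( X ) %*% X
ssgraph.obj <- ssgraph( data = S, n = 80, iter = 1000 )
```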

n

The number of observations. It is required when "data" is a covariance matrix.

method

A character string with two options, "ggm" (default) and "gcgm". Option "ggm" fits Gaussian graphical models under a Gaussianity assumption. Option "gcgm" fits Gaussian copula graphical models for data that do not satisfy the Gaussianity assumption (e.g. continuous non-Gaussian, discrete, or mixed data).

not.cont

For the case method = "gcgm", a binary vector in which 1 indicates a non-continuous variable.
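
As a sketch, for a dataset whose last three columns are binary (the toy data below is illustrative only):

```r
library( ssgraph )

# Toy mixed data: 4 continuous and 3 binary variables
set.seed( 1 )
data <- cbind( matrix( rnorm( 80 * 4 ), 80, 4 ),
               matrix( rbinom( 80 * 3, size = 1, prob = 0.5 ), 80, 3 ) )

# 1 flags the non-continuous (here binary) columns
not.cont <- c( 0, 0, 0, 0, 1, 1, 1 )

ssgraph.obj <- ssgraph( data = data, method = "gcgm", 
                        not.cont = not.cont, iter = 1000 )
```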

iter

The number of iterations for the sampling algorithm.

burnin

The number of burn-in iterations for the sampling algorithm.

var1

The variance of the prior on the precision-matrix elements corresponding to entries with no link in the graph.

var2

The variance of the prior on the precision-matrix elements corresponding to entries with a link in the graph.

lambda

The parameter of the prior on the diagonal elements of the precision matrix.

g.prior

Determines the prior distribution of each edge in the graph. There are two options: a single value between 0 and 1 (e.g. 0.5 as a noninformative prior) or a (p \times p) matrix with elements between 0 and 1.
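
For instance, a matrix prior makes it possible to encourage or discourage individual edges (a sketch; the prior probability and data dimensions are illustrative):

```r
library( ssgraph )
library( BDgraph )   # for bdgraph.sim

p <- 7
# Sparsity-encouraging prior: inclusion probability 0.1 for every edge
g.prior.mat <- matrix( 0.1, p, p )

data.sim    <- bdgraph.sim( n = 80, p = p, prob = 0.5 )
ssgraph.obj <- ssgraph( data = data.sim, g.prior = g.prior.mat, iter = 1000 )
```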

g.start

Corresponds to the starting point of the graph. It can be a (p \times p) matrix, "full" (default), or "empty". Option "empty" means the initial graph is an empty graph; "full" means a full graph. It can also be an object of S3 class "ssgraph" (from package ssgraph) or "bdgraph" (from package BDgraph); this option can be used to continue the sampling algorithm from the output of a previous run (see Examples).

sig.start

Corresponds to the starting point of the covariance matrix. It must be a positive definite matrix.

save

Logical: if FALSE (default), the adjacency matrices are not saved; if TRUE, the adjacency matrices sampled after burn-in are saved.

print

The interval (in iterations) at which progress of the MCMC algorithm is printed.

cores

The number of cores to use for parallel execution. The default is to use 2 CPU cores. Setting cores = "all" uses all available CPU cores.

Value

An object of S3 class "ssgraph" is returned, containing:

p_links

An upper triangular matrix containing the estimated posterior probabilities of all possible links.

K_hat

The posterior estimate of the precision matrix.
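
A common use of p_links is to select a graph by thresholding the posterior inclusion probabilities (a sketch, assuming ssgraph.obj is the output of a previous ssgraph run; the 0.5 cutoff is a conventional choice, not a package default):

```r
# Adjacency matrix of the selected graph: keep links whose
# posterior inclusion probability exceeds 0.5
selected.graph <- 1 * ( ssgraph.obj $ p_links > 0.5 )
```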

If save = TRUE, the following components are also returned:

sample_graphs

A vector of strings encoding the adjacency matrices of the graphs visited after burn-in.

graph_weights

A vector with the visit counts of the graphs visited after burn-in.

all_graphs

A vector with the indices of the adjacency matrices for all iterations after burn-in. It is needed for monitoring the convergence of the MCMC sampling algorithm.

all_weights

A vector with the waiting times for all iterations after burn-in. It is needed for monitoring the convergence of the MCMC sampling algorithm.

Author(s)

Reza Mohammadi a.mohammadi@uva.nl

References

Wang, H. (2015). Scaling it up: Stochastic search structure learning in graphical models. Bayesian Analysis, 10(2):351-377.

George, E. I. and McCulloch, R. E. (1993). Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881-889.

Griffin, J. E. and Brown, P. J. (2010). Inference with normal-gamma prior distributions in regression problems. Bayesian Analysis, 5(1):171-188.

Mohammadi, A., et al. (2017). Bayesian modelling of Dupuytren disease by using Gaussian copula graphical models. Journal of the Royal Statistical Society: Series C, 66(3):629-645.

Mohammadi, R. and Wit, E. C. (2019). BDgraph: An R Package for Bayesian Structure Learning in Graphical Models. Journal of Statistical Software, 89(3):1-30.

Mohammadi, A. and Wit, E. C. (2015). Bayesian Structure Learning in Sparse Gaussian Graphical Models. Bayesian Analysis, 10(1):109-138.

See Also

bdgraph, bdgraph.mpl, summary.ssgraph, compare

Examples

library( BDgraph )   # for bdgraph.sim and compare
library( ssgraph )

# Generating multivariate normal data from a 'random' graph
data.sim <- bdgraph.sim( n = 80, p = 7, prob = 0.5, vis = TRUE )

# Running algorithm based on GGMs
ssgraph.obj <- ssgraph( data = data.sim, iter = 1000 )

summary( ssgraph.obj )

# To compare the result with true graph
compare( data.sim, ssgraph.obj, main = c( "Target", "ssgraph" ), vis = TRUE )

## Not run: 

# Running algorithm with starting points from previous run
ssgraph.obj2 <- ssgraph( data = data.sim, iter = 5000, g.start = ssgraph.obj )
    
compare( data.sim, ssgraph.obj, ssgraph.obj2, vis = TRUE, 
         main = c( "Target", "First run", "Second run" ) )

## End(Not run)

ssgraph documentation built on July 8, 2020, 7:32 p.m.