sparsenetgls: The sparsenetgls() function

View source: R/sparsenetgls.R

Description

The sparsenetgls function combines graph structure learning with generalized least squares regression. It is designed for multivariate regression and uses a penalized and/or regularised approach to derive the precision and covariance matrix of the multivariate Gaussian distributed responses. A Gaussian graphical model is used to learn the structure of the graph and to construct the precision and covariance matrix. Generalized least squares regression is then used to derive the sandwich estimate of the variance-covariance matrix for the regression coefficients of the explanatory variables, conditional on the solutions for the precision and covariance matrix.
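
As a rough illustration of the sandwich step, the sketch below computes a generalized least squares estimate with a working precision matrix W and evaluates the variance-covariance matrix of the coefficients against a covariance estimate V. This is not the package's internal code; the function name gls_sandwich and its arguments are illustrative only.

## Minimal GLS sandwich sketch (illustrative, not part of sparsenetgls):
## y: length-n response, X: n x q design matrix,
## W: n x n working precision matrix, V: n x n covariance estimate
gls_sandwich <- function(y, X, W, V) {
    bread <- solve(t(X) %*% W %*% X)        # (X' W X)^{-1}
    beta  <- bread %*% t(X) %*% W %*% y     # GLS coefficient estimate
    meat  <- t(X) %*% W %*% V %*% W %*% X   # X' W V W X
    vcov  <- bread %*% meat %*% bread       # sandwich variance of beta
    list(beta = drop(beta), vcov = vcov)
}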

Usage

sparsenetgls(
  responsedata,
  predictdata,
  nlambda = 10,
  ndist = 5,
  method = c("lasso", "glasso", "elastic", "mb"),
  lambda.min.ratio = 1e-05
)

Arguments

responsedata

It is a data matrix of multivariate normally distributed response variables. Each row represents one observation sample and each column represents one variable.

predictdata

It is a data matrix of explanatory variables and has the same number of rows as the response data.

nlambda

It is an integer giving the number of lambda values used in the penalized path for estimating the precision matrix. The default value is 10.

ndist

It is an integer giving the number of distance values used in the penalized path for estimating the covariance matrix. The default value is 5.

method

It is the option parameter for selecting the penalized method used to derive the precision matrix in the calculation of the sandwich estimator of the regression coefficients and their variance-covariance matrix. The options are 'glasso', 'lasso', 'elastic', and 'mb'. 'glasso' uses the graphical lasso method documented in Yuan and Lin (2007) and Friedman, Hastie et al. (2007); it uses the imported function from the R package 'huge'. 'lasso' uses penalized linear regression among the response variables (Y[,j]~Y[,1]+...+Y[,j-1]+Y[,j+1]+...+Y[,p]) to estimate the precision matrix. 'elastic' uses elastic-net-regularized linear regression among the response variables to estimate the precision matrix. Both of these methods use the coordinate descent algorithm documented in Friedman, J., Hastie, T. and Tibshirani, R. (2008) and the imported function from the R package 'glmnet'. 'mb' uses the Meinshausen and Buhlmann (2006) penalized linear regression with neighbourhood selection via the lasso to select the covariance terms and derive the corresponding precision matrix; it uses the imported function from 'huge'.
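
As a conceptual sketch of the nodewise regression idea behind the 'lasso' option, the code below regresses each response column on the remaining columns with glmnet and maps the fitted coefficients and residual variances into a precision-matrix estimate. It is not the exact algorithm inside sparsenetgls(), and nodewise_precision is an illustrative name; the scaling and symmetrisation used by the package may differ.

library(glmnet)
## Nodewise lasso sketch: Omega[j,j] = 1/tau_j^2, Omega[j,k] = -beta_jk/tau_j^2
nodewise_precision <- function(Y, lambda = 0.1) {
    p <- ncol(Y)                      # needs p >= 3 so glmnet gets >= 2 predictors
    Omega <- diag(p)
    for (j in seq_len(p)) {
        fit  <- glmnet(Y[, -j, drop = FALSE], Y[, j], lambda = lambda)
        res  <- Y[, j] - predict(fit, Y[, -j, drop = FALSE])
        tau2 <- mean(res^2)           # residual variance of node j
        Omega[j, j]  <- 1 / tau2
        Omega[j, -j] <- -as.numeric(coef(fit))[-1] / tau2   # drop the intercept
    }
    (Omega + t(Omega)) / 2            # symmetrise the nodewise estimates
}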

lambda.min.ratio

It is a parameter passed to the function huge() in the package 'huge'. Quoting from huge(), it is the minimal value of lambda, expressed as a fraction of the upper bound (MAX) of the regularization/thresholding parameter that makes all the estimates equal to 0. The default value is 1e-05. It is only applicable when the 'glasso' or 'mb' method is used.

Value

Returns a list of regression results including the regression coefficients, the array of variance-covariance matrices for the different lambda and distance values, the lambda and distance (power) values, the BIC and AIC for model fitting, and the list of precision matrices on the penalized path.
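
For instance, after running the code in the Examples section, the returned list can be inspected as sketched below. fitgls$beta appears in the example itself; the names of the other components (such as the BIC vector) are not spelled out on this page, so check them with names(fitgls) before use.

# names(fitgls)          # list the components returned by sparsenetgls()
# fitgls$beta            # regression coefficients, one column per lambda value
# which.min(fitgls$bic)  # index of the lambda minimising BIC ('bic' name assumed)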

Examples

library(sparsenetgls)   # attaches the required packages Matrix and MASS
ndox=5; p=3; n=1000
VARknown <- rWishart(1, df=4, Sigma=matrix(c(1,0,0,0,1,0,0,0,1),
nrow=3,ncol=3))
normc <- mvrnorm(n=n,mu=rep(0,p),Sigma=VARknown[,,1])
Y0=normc
##u-beta
u <- rep(1,ndox)
X <- mvrnorm(n=n,mu=rep(0,ndox),Sigma=Diagonal(ndox,rep(1,ndox)))
X00 <- scale(X,center=TRUE,scale=TRUE)
X0 <- cbind(rep(1,n),X00)
#Add predictors of simulated CNA
abundance1 <- scale(Y0,center=TRUE,scale=TRUE)+as.vector(X00%*%as.matrix(u))

##sparsenetgls()
fitgls <- sparsenetgls(responsedata=abundance1,predictdata=X00,
nlambda=5,ndist=2,method='elastic')
nlambda=5
##rescale regression coefficients from sparsenetgls
#betagls <- matrix(nrow=nlambda, ncol=ndox+1)
#for (i in seq_len(nlambda))   
#betagls[i,] <- convertbeta(Y=abundance1, X=X00, q=ndox+1,
#beta0=fitgls$beta[,i])$betaconv

Example output

Loading required package: Matrix
Loading required package: MASS
