grace: Graph-Constrained Estimation


Description

Calculate coefficient estimates of Grace based on methods described in Li and Li (2008).

Usage

  grace(Y, X, L, lambda.L, lambda.1 = 0, lambda.2 = 0, normalize.L = FALSE, K = 10, verbose = FALSE)

Arguments

Y

outcome vector.

X

matrix of predictors.

L

penalty weight matrix L, typically the Laplacian matrix of a network over the predictors (see Details).

lambda.L

tuning parameter value for the penalty induced by the L matrix (see details). If a sequence of lambda.L values is supplied, K-fold cross-validation is performed.

lambda.1

tuning parameter value for the lasso penalty (see details). If a sequence of lambda.1 values is supplied, K-fold cross-validation is performed.

lambda.2

tuning parameter value for the ridge penalty (see details). If a sequence of lambda.2 values is supplied, K-fold cross-validation is performed.

normalize.L

whether the penalty weight matrix L should be normalized.

K

number of folds in cross-validation.

verbose

whether computation progress should be printed.

Details

The Grace estimator is defined as

(\hat{\alpha}, \hat{\beta}) = \arg\min_{\alpha, \beta}\left\{ \|Y - \alpha \mathbf{1} - X\beta\|_2^2 + \lambda_L\, \beta^\top L \beta + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2 \right\}

In the formulation, L is the penalty weight matrix, and \lambda_L, \lambda_1 and \lambda_2 correspond to the arguments lambda.L, lambda.1 and lambda.2; these tuning parameters may be chosen by cross-validation. In practice, X is standardized and Y is centered before \hat{\beta} is estimated, and the resulting estimate is rescaled back to the original scale. Note that the intercept \hat{\alpha} is not penalized.
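
As a small illustration (a sketch, not part of the package itself), L is often taken to be the Laplacian of a network over the predictors, in which case \beta^\top L \beta equals the sum of the squared differences (\beta_i - \beta_j)^2 over the edges of the network. The toy adjacency matrix A below is made up for the illustration.

p <- 4
A <- matrix(0, p, p)                 # adjacency matrix of a toy network
A[1, 2] <- A[2, 1] <- 1              # edge between predictors 1 and 2
A[1, 3] <- A[3, 1] <- 1              # edge between predictors 1 and 3
L <- diag(rowSums(A)) - A            # unnormalized graph Laplacian
# For this L, t(beta) %*% L %*% beta = (beta[1] - beta[2])^2 + (beta[1] - beta[3])^2,
# so the penalty encourages connected predictors to have similar coefficients.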

The Grace estimator can be viewed as a generalized elastic net estimator (Zou and Hastie, 2005). It shrinks the regression coefficients towards the space spanned by the eigenvectors of L with the smallest eigenvalues. Therefore, if L is informative in the sense that \beta^\top L \beta is small, the Grace estimator can be less biased than the elastic net.
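
For intuition, a hypothetical sketch (not from the package documentation): if L is the identity matrix, the graph penalty \lambda_L \beta^\top L \beta collapses to an ordinary ridge term, so Grace behaves like an elastic-net-type estimator with total ridge weight \lambda_L + \lambda_2. The data below are made up for illustration.

library(Grace)
set.seed(1)
n <- 50; p <- 10
X <- matrix(rnorm(n * p), n, p)      # toy predictors
Y <- X[, 1] - X[, 2] + rnorm(n)      # toy outcome
# With L = diag(p), the penalty is (lambda.L + lambda.2) * sum(beta^2) +
# lambda.1 * sum(abs(beta)), i.e. an elastic-net-type penalty.
fit <- grace(Y, X, L = diag(p), lambda.L = 0.1, lambda.1 = 0.05)
fit$beta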

Value

An R ‘list’ with elements:

intercept

fitted intercept.

beta

fitted regression coefficients.
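
As a usage note (a sketch; Y, X and L below are assumed to be defined as in the Examples section), fitted values on the original scale can be formed from the two returned elements:

fit <- grace(Y, X, L, lambda.L = 0.1)    # assumes Y, X and L already exist
yhat <- fit$intercept + X %*% fit$beta   # in-sample fitted values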

Author(s)

Sen Zhao

References

Zou, H., and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B, 67, 301-320.

Li, C., and Li, H. (2008). Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics, 24, 1175-1182.

Examples

library(MASS)    # for mvrnorm()
library(Grace)

set.seed(120)
n <- 100
p <- 200

# Graph Laplacian of 10 disjoint star networks: in each block of p/10 = 20
# predictors, the first node (the hub) is connected to the other 19.
L <- matrix(0, nrow = p, ncol = p)
for(i in 1:10){
	L[((i - 1) * p / 10 + 1), ((i - 1) * p / 10 + 1):(i * (p / 10))] <- -1
}
diag(L) <- 0
ind <- lower.tri(L, diag = FALSE)
L[ind] <- t(L)[ind]        # symmetrize the off-diagonal entries
diag(L) <- -rowSums(L)     # diagonal entries equal the node degrees

# True coefficients: the first 10 predictors (all in the first network block)
# have effect 1; the rest are 0.
beta <- c(rep(1, 10), rep(0, p - 10))

# Predictor covariance derived from the network; the noise standard deviation
# is set to half the standard deviation of the signal X %*% beta.
Sigma <- solve(L + 0.1 * diag(p))
sigma.error <- sqrt(t(beta) %*% Sigma %*% beta) / 2

X <- mvrnorm(n, mu = rep(0, p), Sigma = Sigma)
Y <- c(X %*% beta + rnorm(n, sd = sigma.error))

# Fit Grace; lambda.L and lambda.2 are selected by K-fold (default 10-fold)
# cross-validation over the supplied grids.
grace(Y, X, L, lambda.L = c(0.08, 0.10, 0.12), lambda.2 = c(0.08, 0.10, 0.12))
