| SSLASSO | R Documentation |
Spike-and-Slab LASSO is a spike-and-slab refinement of the LASSO procedure, using a mixture of Laplace priors indexed by 'lambda0' (spike) and 'lambda1' (slab).
The 'SSLASSO' procedure fits coefficient paths for Spike-and-Slab LASSO-penalized linear regression models over a grid of values for the regularization parameter 'lambda0'. The code has been adapted from the 'ncvreg' package (Breheny and Huang, 2011).
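For reference, the prior underlying this penalty (Ročková and George, 2018) is a two-point mixture of Laplace densities; the display below is a sketch of that form, with 'theta' denoting the prior weight on the slab:
pi(beta_j | theta) = theta * psi_1(beta_j) + (1 - theta) * psi_0(beta_j),   where psi_k(beta) = (lambda_k / 2) * exp(-lambda_k * |beta|),
so the spike psi_0 (large 'lambda0') concentrates near zero while the slab psi_1 (small 'lambda1') remains diffuse.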
SSLASSO(
X,
y,
penalty = c("adaptive", "separable"),
variance = c("fixed", "unknown"),
lambda1,
lambda0,
beta.init = numeric(ncol(X)),
nlambda = 100,
theta = 0.5,
sigma = 1,
a = 1,
b,
eps = 0.001,
max.iter = 500,
counter = 10,
warn = FALSE
)
X |
The design matrix (n x p), without an intercept. 'SSLASSO' standardizes the data by default. |
y |
Vector of continuous responses (n x 1). The responses will be centered by default. |
penalty |
The penalty to be applied to the model. Either "separable" (with a fixed 'theta') or "adaptive" (with a random 'theta', where 'theta ~ Beta(a, b)' and 'b' defaults to p). The default is "adaptive". |
variance |
Specifies whether the error variance is "fixed" or "unknown". |
lambda1 |
The slab penalty parameter. Must be smaller than the smallest 'lambda0'. |
lambda0 |
A sequence of spike penalty parameters. Must be monotone increasing. If not specified, a default sequence is generated; see the sketch after this argument list for supplying a custom grid. |
beta.init |
Initial values for the coefficients. Defaults to a vector of zeros. |
nlambda |
The number of 'lambda0' values to use in the default sequence. Defaults to 100. |
theta |
The prior mixing proportion, i.e. the prior probability that a coefficient comes from the slab. Fixed for the "separable" penalty; used as a starting value for the "adaptive" penalty. Defaults to 0.5. |
sigma |
The error standard deviation when 'variance' is "fixed"; used as a starting value when 'variance' is "unknown". Defaults to 1. |
a |
Hyperparameter for the Beta prior on theta. Defaults to 1. |
b |
Hyperparameter for the Beta prior on theta. Defaults to the number of predictors, p. |
eps |
Convergence tolerance. The algorithm stops when the maximum change in coefficients is less than 'eps'. Defaults to 0.001. |
max.iter |
The maximum number of iterations. Defaults to 500. |
counter |
The number of iterations between updates of the adaptive penalty parameters. Defaults to 10. |
warn |
A logical value indicating whether to issue a warning if the algorithm fails to converge. Defaults to 'FALSE'. |
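The following is a minimal, illustrative sketch of supplying a custom spike grid and slab parameter that satisfy the constraints described above; the simulated data and the specific penalty values are assumptions for the example, not recommendations.
# Illustrative only: a custom 'lambda0' grid (monotone increasing) with 'lambda1' below it
library(SSLASSO)
set.seed(1)
n <- 50; p <- 100
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- X[, 1] - X[, 2] + rnorm(n)
lambda1 <- 1                              # slab penalty (small, diffuse component)
lambda0 <- seq(2, n, length.out = 50)     # spike penalties, increasing and above lambda1
fit <- SSLASSO(X, y, lambda1 = lambda1, lambda0 = lambda0)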
An object with S3 class "SSLASSO" (a short sketch of accessing these components follows the list). The object contains:
beta |
A p x L matrix of estimated coefficients, where L is the number of regularization parameter values. |
intercept |
A vector of length L containing the intercept terms. |
iter |
The number of iterations for each value of 'lambda0'. |
lambda0 |
The sequence of 'lambda0' values used. |
lambda1 |
The 'lambda1' value used. |
penalty |
The penalty type used. |
thetas |
A vector of length L containing the hyper-parameter values 'theta' (the same as 'theta' for "separable" penalty). |
sigmas |
A vector of length L containing the values 'sigma' (the same as the input 'sigma' when 'variance' is "fixed"). |
select |
A (p x L) binary matrix indicating which variables were selected along the solution path. |
model |
A single model chosen after the stabilization of the regularization path. |
n |
The number of observations. |
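The lines below are a minimal sketch of inspecting these components; they assume a design matrix 'X' and response 'y' as generated in the Examples section, and the object name 'fit' is illustrative.
# Illustrative sketch: inspecting a fitted "SSLASSO" object
fit <- SSLASSO(X, y)              # adaptive penalty, fixed variance (the defaults)
dim(fit$beta)                     # p x L matrix of coefficient estimates
length(fit$lambda0)               # L spike penalty values along the path
tail(fit$thetas, 1)               # mixing proportion at the end of the path
fit$model                         # single model chosen after path stabilization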
Veronika Rockova <Veronika.Rockova@chicagobooth.edu>, Gemma Moran <gm845@stat.rutgers.edu>
Ročková, V., & George, E. I. (2018). The spike-and-slab lasso. Journal of the American Statistical Association, 113(521), 431-444.
Moran, G. E., Ročková, V., & George, E. I. (2019). Variance prior forms for high-dimensional Bayesian variable selection. Bayesian Analysis, 14(4), 1091-1119.
See also: 'plot.SSLASSO'.
## Linear regression, where p > n
library(SSLASSO)
p <- 100
n <- 50
X <- matrix(rnorm(n*p), nrow = n, ncol = p)
beta <- c(1, 2, 3, rep(0, p-3))
y <- X[, 1] * beta[1] + X[, 2] * beta[2] + X[, 3] * beta[3] + rnorm(n)
# Oracle SSLASSO with known variance
result1 <- SSLASSO(X, y, penalty = "separable", theta = 3/p)
plot(result1)
# Adaptive SSLASSO with known variance
result2 <- SSLASSO(X, y)
plot(result2)
# Adaptive SSLASSO with unknown variance
result3 <- SSLASSO(X, y, variance = "unknown")
plot(result3)
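As a possible follow-up (not part of the package examples), the selection indicators documented above can be compared against the true support of the simulated data; 'result2$select' is used as described in the value list, and the comparison itself is purely illustrative.
# Illustrative check: variables selected at the last 'lambda0' vs. the true support (1:3)
selected <- which(result2$select[, ncol(result2$select)] == 1)
intersect(selected, 1:3)   # true signals that were recovered
setdiff(selected, 1:3)     # false positives, if any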