conquer.reg (R Documentation)
Fit sparse quantile regression models in high dimensions via regularized conquer (convolution-type smoothed quantile regression) with "lasso", "elastic-net", "group lasso", "sparse group lasso", "scad" and "mcp" penalties. For "scad" and "mcp", the iteratively reweighted \ell_1-penalized algorithm is complemented with a local adaptive majorize-minimize (LAMM) algorithm.
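As a sketch of the objective being minimized, following the conquer formulation of Tan, Wang and Zhou (2022) (this display is an illustration and is not reproduced from the package documentation), the function approximately solves

\min_{\beta} (1/n) \sum_{i=1}^{n} \ell_h(Y_i - X_i^T \beta) + \lambda \cdot pen(\beta), where \ell_h(u) = (\rho_\tau * K_h)(u) = \int \rho_\tau(v) K((v - u)/h) / h \, dv,

\rho_\tau(u) = u (\tau - I\{u < 0\}) is the quantile check loss (Koenker and Bassett, 1978), K is the chosen kernel, h is the bandwidth, and pen(\beta) is one of the penalties listed above.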
conquer.reg(
  X,
  Y,
  lambda = 0.2,
  tau = 0.5,
  kernel = c("Gaussian", "logistic", "uniform", "parabolic", "triangular"),
  h = 0,
  penalty = c("lasso", "elastic", "group", "sparse-group", "scad", "mcp"),
  para.elastic = 0.5,
  group = NULL,
  weights = NULL,
  para.scad = 3.7,
  para.mcp = 3,
  epsilon = 0.001,
  iteMax = 500,
  phi0 = 0.01,
  gamma = 1.2,
  iteTight = 3
)
X
An n by p design matrix. Each row is a vector of observations with p covariates.
Y
An n-dimensional response vector.
lambda
(optional) Regularization parameter. Can be a scalar or a sequence. If the input is a sequence, the function will sort it in ascending order and run the regression accordingly. Default is 0.2.
tau
(optional) Quantile level (between 0 and 1). Default is 0.5.
kernel
(optional) A character string specifying the choice of kernel function. Default is "Gaussian". Choices are "Gaussian", "logistic", "uniform", "parabolic" and "triangular".
h
(optional) Bandwidth/smoothing parameter. Default is \max\{0.5 * (\log(p) / n)^{0.25}, 0.05\}. The default will be used if the input value is less than or equal to 0 (see the sketch after this argument list).
penalty
(optional) A character string specifying the penalty. Default is "lasso" (Tibshirani, 1996). The other options are "elastic" for elastic-net (Zou and Hastie, 2005), "group" for group lasso (Yuan and Lin, 2006), "sparse-group" for sparse group lasso (Simon et al., 2013), "scad" (Fan and Li, 2001) and "mcp" (Zhang, 2010).
para.elastic
(optional) The mixing parameter between 0 and 1 (usually denoted α) for elastic-net. The penalty is defined as α ||β||_1 + (1 - α) ||β||_2^2. Default is 0.5. Setting para.elastic = 1 recovers the lasso penalty, and para.elastic = 0 gives a pure ridge-type penalty. Only specify it if penalty = "elastic".
group
(optional) A p-dimensional vector specifying group indices. Only specify it if penalty = "group" or penalty = "sparse-group".
weights
(optional) A vector specifying group weights for group lasso and sparse group lasso. Its length must equal the number of groups. If not specified, the default weights are the square roots of the group sizes. For example, if group = c(1, 1, 2, 2, 2), the default weights are \sqrt{2} for the first group and \sqrt{3} for the second (see the sketch after this argument list).
para.scad
(optional) The constant parameter for "scad". Default is 3.7. Only specify it if penalty = "scad".
para.mcp
(optional) The constant parameter for "mcp". Default is 3. Only specify it if penalty = "mcp".
epsilon
(optional) Tolerance level for the stopping rule. The iteration stops when the maximum magnitude of the change in coefficient updates falls below epsilon. Default is 0.001.
iteMax
(optional) Maximum number of iterations. Default is 500.
phi0
(optional) The initial quadratic coefficient parameter in the local adaptive majorize-minimize algorithm. Default is 0.01.
gamma
(optional) The adaptive search parameter (greater than 1) in the local adaptive majorize-minimize algorithm. Default is 1.2.
iteTight
(optional) Maximum number of tightening iterations in the iteratively reweighted \ell_1-penalized algorithm. Only specify it if the penalty is scad or mcp. Default is 3.
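A minimal sketch of two of the defaults described above (the values of n, p and group here are illustrative assumptions, not part of the package):

## Default bandwidth used when h <= 0, computed from the formula above
n = 200; p = 500
h.default = max(0.5 * (log(p) / n)^0.25, 0.05)

## Default group weights: square roots of the group sizes
group = c(1, 1, 2, 2, 2)
weights.default = sqrt(as.numeric(table(group)))  # c(sqrt(2), sqrt(3))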
An object containing the following items will be returned:
coeff
If the input lambda is a scalar, then coeff returns a (p + 1) vector of estimated coefficients, including the intercept. If the input lambda is a sequence, then coeff returns a (p + 1) by nlambda matrix, where nlambda refers to the length of the lambda sequence (see the sketch after this list).
bandwidth
Bandwidth value.
tau
Quantile level.
kernel
Kernel function.
penalty
Penalty type.
lambda
Regularization parameter(s).
n
Sample size.
p
Number of covariates.
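As an illustration of the returned coeff shapes (a sketch; it assumes X and Y are defined as in the examples below):

## Scalar lambda: coeff is a (p + 1)-vector
fit = conquer.reg(X, Y, lambda = 0.1, tau = 0.5)
length(fit$coeff)  # p + 1

## Sequence lambda: coeff is a (p + 1) by nlambda matrix,
## with columns ordered by ascending lambda
fit.seq = conquer.reg(X, Y, lambda = c(0.2, 0.05, 0.1), tau = 0.5)
dim(fit.seq$coeff)  # c(p + 1, 3)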
Belloni, A. and Chernozhukov, V. (2011). \ell_1 penalized quantile regression in high-dimensional sparse models. Ann. Statist., 39, 82-130.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc., 96, 1348-1360.
Fan, J., Liu, H., Sun, Q. and Zhang, T. (2018). I-LAMM for sparse learning: Simultaneous control of algorithmic complexity and statistical error. Ann. Statist., 46, 814-841.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33-50.
Simon, N., Friedman, J., Hastie, T. and Tibshirani, R. (2013). A sparse-group lasso. J. Comp. Graph. Statist., 22, 231-245.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Statist. Soc. Ser. B, 58, 267-288.
Tan, K. M., Wang, L. and Zhou, W.-X. (2022). High-dimensional quantile regression: Convolution smoothing and concave regularization. J. R. Statist. Soc. Ser. B, 84, 205-233.
Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped variables. J. R. Statist. Soc. Ser. B, 68, 49-67.
Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist., 38, 894-942.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Statist. Soc. Ser. B, 67, 301-320.
See conquer.cv.reg for regularized quantile regression with cross-validation.
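A minimal cross-validation sketch (it assumes X and Y as in the examples below; apart from X, Y, tau and penalty, the arguments and return fields of conquer.cv.reg should be checked against its own documentation):

## Cross-validated lasso-penalized conquer at tau = 0.7
fit.cv = conquer.cv.reg(X, Y, tau = 0.7, penalty = "lasso")
beta.cv = fit.cv$coeff  # field name assumed to mirror conquer.reg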
n = 200; p = 500; s = 10
beta = c(rep(1.5, s), rep(0, p - s))
X = matrix(rnorm(n * p), n, p)
Y = X %*% beta + rt(n, 2)

## Regularized conquer with lasso penalty at tau = 0.7
fit.lasso = conquer.reg(X, Y, lambda = 0.05, tau = 0.7, penalty = "lasso")
beta.lasso = fit.lasso$coeff

## Regularized conquer with elastic-net penalty at tau = 0.7
fit.elastic = conquer.reg(X, Y, lambda = 0.1, tau = 0.7, penalty = "elastic", para.elastic = 0.7)
beta.elastic = fit.elastic$coeff

## Regularized conquer with scad penalty at tau = 0.7
fit.scad = conquer.reg(X, Y, lambda = 0.13, tau = 0.7, penalty = "scad")
beta.scad = fit.scad$coeff

## Regularized conquer with group lasso at tau = 0.7
beta = c(rep(1.3, 5), rep(1.5, 5), rep(0, p - s))
err = rt(n, 2)
Y = X %*% beta + err
group = c(rep(1, 5), rep(2, 5), rep(3, p - s))
fit.group = conquer.reg(X, Y, lambda = 0.05, tau = 0.7, penalty = "group", group = group)
beta.group = fit.group$coeff
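As a follow-up sketch, the fitted coefficients can be turned into predicted conditional quantiles (this assumes the intercept is stored as the first entry of coeff, which is not stated explicitly on this page):

## Predicted conditional 0.7-quantiles from the lasso fit above
## (intercept assumed to be the first entry of beta.lasso)
pred.lasso = cbind(1, X) %*% beta.lasso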