# slim: Sparse Linear Regression using Nonsmooth Loss Functions and L1 Regularization

In flare: Family of Lasso Regression

## Description

The function `slim` implements a family of Lasso variants for estimating high dimensional sparse linear models, including the Dantzig selector, LAD Lasso, SQRT Lasso, and Lq Lasso. We adopt the alternating direction method of multipliers (ADMM) and convert the original optimization problem into a sequence of L1-penalized least squares problems, which can be solved efficiently by combining linearization with multi-stage screening of variables. For the Dantzig selector, missing values in the design matrix and response vector can be tolerated.
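To make the ADMM strategy concrete, the following is a minimal sketch for the standard Lasso objective only. It is illustrative and not flare's implementation (`soft` and `admm_lasso` are our own names; slim additionally uses linearization and multi-stage variable screening): the cycle alternates a ridge-type solve, a soft-thresholding step, and a dual update.

```r
# Minimal ADMM sketch for the standard Lasso objective
#   (1/2n)||Y - X b||_2^2 + lambda ||b||_1
# Illustrative only -- not flare's internal code.
soft <- function(a, k) sign(a) * pmax(abs(a) - k, 0)  # soft-thresholding operator

admm_lasso <- function(X, Y, lambda, rho = 1, max.ite = 500, prec = 1e-5) {
  n <- nrow(X); d <- ncol(X)
  XtX <- crossprod(X) / n
  XtY <- crossprod(X, Y) / n
  A <- solve(XtX + rho * diag(d))          # factor once, reuse every iteration
  b <- z <- u <- rep(0, d)
  for (i in seq_len(max.ite)) {
    b <- A %*% (XtY + rho * (z - u))       # ridge-type b-update
    z_new <- soft(b + u, lambda / rho)     # sparsity-inducing z-update
    u <- u + b - z_new                     # scaled dual update
    if (max(abs(z_new - z)) < prec) { z <- z_new; break }
    z <- z_new
  }
  as.numeric(z)
}
```

Here `rho` plays the role of the ADMM penalty parameter described in the Arguments below.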

## Usage

```r
slim(X, Y, lambda = NULL, nlambda = NULL, lambda.min.value = NULL,
     lambda.min.ratio = NULL, rho = 1, method = "lq", q = 2,
     res.sd = FALSE, prec = 1e-5, max.ite = 1e5, verbose = TRUE)
```

## Arguments

- `Y`: The n-dimensional response vector.
- `X`: The n by d design matrix. d ≥ 2 is required.
- `lambda`: A sequence of decreasing positive numbers to control the regularization. Typical usage is to leave the input `lambda = NULL` and have the program compute its own sequence based on `nlambda` and `lambda.min.ratio`; users can also specify a sequence to override this. The default sequence runs from lambda.max to lambda.min.ratio*lambda.max. For Lq regression, the default value of lambda.max is π√(log(d)/n). For the Dantzig selector, the default value of lambda.max is the minimum regularization parameter that yields an all-zero estimate.
- `nlambda`: The number of values in `lambda`. The default value is 5.
- `lambda.min.value`: The smallest value of `lambda`, as a fraction of the upper bound (lambda.max) of the regularization parameter. The program automatically generates `lambda` as a sequence of length `nlambda`, from lambda.max to lambda.min.ratio*lambda.max on a log scale. The default value is log(d)/n for the Dantzig selector and 0.3*lambda.max for Lq Lasso.
- `lambda.min.ratio`: The smallest value of `lambda` as a ratio of lambda.max. The default value is 0.3 for Lq Lasso and 0.5 for the Dantzig selector.
- `rho`: The penalty parameter used in ADMM. The default value is √d.
- `method`: The Dantzig selector is applied if `method = "dantzig"`, Lq Lasso if `method = "lq"`, and the standard Lasso if `method = "lasso"`. The default value is "lq".
- `q`: The loss exponent used in Lq Lasso. It is only applicable when `method = "lq"` and must lie in [1, 2]. The default value is 2.
- `res.sd`: Flag indicating whether the response variables are standardized. The default value is FALSE.
- `prec`: Stopping criterion. The default value is 1e-5.
- `max.ite`: The iteration limit. The default value is 1e5.
- `verbose`: Tracing information printing is disabled if `verbose = FALSE`. The default value is TRUE.
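The default sequence generation described above can be sketched as follows. This is a hedged illustration of the documented behavior, not flare's internal code, and `make_lambda` is a hypothetical helper name:

```r
# Illustrative only: nlambda values log-spaced from lambda.max
# down to lambda.min.ratio * lambda.max, as documented above.
make_lambda <- function(lambda.max, lambda.min.ratio = 0.3, nlambda = 5) {
  exp(seq(log(lambda.max), log(lambda.min.ratio * lambda.max),
          length.out = nlambda))
}
```

For example, `make_lambda(1, 0.3, 5)` produces five values decreasing from 1 to 0.3 with a constant ratio between consecutive entries.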

## Details

Standard Lasso solves the following optimization problem

\min_{β} \frac{1}{2n} || Y - X β ||_2^2 + λ || β ||_1

The Dantzig selector solves the following optimization problem

\min_{β} || β ||_1, \quad \textrm{s.t. } || X^T (Y - X β) ||_{∞} < λ

Lq Lasso solves the following optimization problem

\min_{β} n^{-\frac{1}{q}} || Y - X β ||_q + λ || β ||_1

where 1 ≤ q ≤ 2. Lq Lasso is equivalent to LAD Lasso when q = 1 and to SQRT Lasso when q = 2.
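To make the three objectives concrete, the snippet below evaluates each at a candidate β. This is an illustration only (all variable names are our own); slim minimizes these objectives via ADMM rather than evaluating them directly:

```r
set.seed(1)
n <- 20; d <- 5
X <- matrix(rnorm(n * d), n, d)
beta <- c(1, -2, rep(0, d - 2))
Y <- X %*% beta + rnorm(n)
lambda <- 0.5
r <- Y - X %*% beta                         # residual vector

# Standard Lasso objective: (1/2n)||r||_2^2 + lambda ||beta||_1
lasso_obj <- sum(r^2) / (2 * n) + lambda * sum(abs(beta))

# Lq Lasso objective: n^(-1/q) ||r||_q + lambda ||beta||_1, 1 <= q <= 2
lq_obj <- function(q) n^(-1 / q) * sum(abs(r)^q)^(1 / q) + lambda * sum(abs(beta))

# Dantzig selector feasibility check: ||X'(Y - X beta)||_inf < lambda
dantzig_feasible <- max(abs(crossprod(X, r))) < lambda
```

Note that `lq_obj(1)` is the LAD Lasso objective and `lq_obj(2)` the SQRT Lasso objective.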

## Value

An object with S3 class "slim" is returned:

- `beta`: A matrix of regression estimates whose columns correspond to the regularization parameters.
- `intercept`: The intercepts corresponding to the regularization parameters.
- `Y`: The value of `Y` used in the program.
- `X`: The value of `X` used in the program.
- `lambda`: The sequence of regularization parameters used in the program.
- `nlambda`: The number of values in `lambda`.
- `method`: The `method` from the input.
- `sparsity`: The sparsity levels of the solution path.
- `ite`: A list of two vectors, where `ite[[1]]` is the number of external iterations and `ite[[2]]` the number of internal iterations, with the i-th entry corresponding to the i-th regularization parameter.
- `verbose`: The `verbose` flag from the input.

## Author(s)

Xingguo Li, Tuo Zhao, Lie Wang, Xiaoming Yuan and Han Liu
Maintainer: Xingguo Li <[email protected]>

## References

1. E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 2007.
2. A. Belloni, V. Chernozhukov and L. Wang. Pivotal recovery of sparse signals via conic programming. Biometrika, 2012.
3. L. Wang. L1 penalized LAD estimator for high dimensional linear regression. Journal of Multivariate Analysis, 2012.
4. J. Liu and J. Ye. Efficient L1/Lq Norm Regularization. Technical Report, 2010.
5. S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 2011.
6. B. He and X. Yuan. On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers. Technical Report, 2012.

## See Also

flare-package, print.slim, plot.slim, coef.slim and predict.slim.

## Examples

```r
## load library
library(flare)
## generate data
n = 50
d = 100
X = matrix(rnorm(n*d), n, d)
beta = c(3,2,0,1.5,rep(0,d-4))
eps = rnorm(n)
Y = X%*%beta + eps
nlamb = 5
ratio = 0.3
## Regression with "dantzig", general "lq" and "lasso" respectively
out1 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="dantzig")
out2 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lq",q=1)
out3 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lq",q=1.5)
out4 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lq",q=2)
out5 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lasso")
## Display results
print(out4)
plot(out4)
coef(out4)
```
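As a hedged follow-up to the example, the returned Value fields documented above can be used directly (illustration only; `best` and `Yhat` are our own names, and the shape of `intercept` is assumed to index by regularization parameter):

```r
## Continuing from the example above (requires out4 from slim).
best <- out4$beta[, out4$nlambda]                  # estimate at the smallest lambda
sum(best != 0)                                     # sparsity of that estimate
Yhat <- X %*% best + out4$intercept[out4$nlambda]  # in-sample fit
mean((Y - Yhat)^2)                                 # residual mean squared error
```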

### Example output

```
Loading required package: lattice

Attaching package: 'igraph'

The following objects are masked from 'package:stats':

    decompose, spectrum

The following object is masked from 'package:base':

    union

Sparse Linear Regression with L1 Regularization.
Dantzig selector with screening.

slim options summary:
5 lambdas used:
[1] 2.490 1.840 1.360 1.010 0.748
Method = dantzig
Degree of freedom: 0 -----> 3
Runtime: 0.006701469 secs

Sparse Linear Regression with L1 Regularization.

slim options summary:
5 lambdas used:
[1] 0.522 0.456 0.398 0.348 0.303
Method = lq
q = 1  loss, LAD Lasso
Degree of freedom: 0 -----> 4
Runtime: 0.02032733 secs

Sparse Linear Regression with L1 Regularization.
LQ norm regrelarized regression (q = 1.5 )  with screening.

slim options summary:
5 lambdas used:
[1] 0.608 0.511 0.429 0.361 0.303
Method = lq
q = 1.5 loss
Degree of freedom: 1 -----> 4
Runtime: 1.847683 secs

Sparse Linear Regression with L1 Regularization.
Square root Lasso with screening.

slim options summary:
5 lambdas used:
[1] 0.685 0.559 0.456 0.372 0.303
Method = lq
q = 2 loss, SQRT Lasso
Degree of freedom: 1 -----> 5
Runtime: 0.0257616 secs

Sparse Linear Regression with L1 Regularization.
Lasso with screening.

slim options summary:
5 lambdas used:
[1] 2.490 1.470 0.870 0.514 0.303
Method = lasso
Degree of freedom: 0 -----> 7
Runtime: 0.0009505749 secs

slim options summary:
5 lambdas used:
[1] 0.685 0.559 0.456 0.372 0.303
Method = lq
q = 2 loss, SQRT Lasso
Degree of freedom: 1 -----> 5
Runtime: 0.0257616 secs

Values of estimated coefficients:
index               1           2           3
lambda         0.6847      0.5587      0.4559
intercept      0.1121      0.1497     0.03843
beta 1      1.845e-17      0.6728        1.72
beta 2              0           0      0.9914
beta 3              0           0           0
```


flare documentation built on May 29, 2017, 5:39 p.m.