# Sparse Linear Regression using Nonsmooth Loss Functions and L1 Regularization

### Description

The function `slim` implements a family of Lasso variants for estimating high dimensional sparse linear models, including the Dantzig selector, LAD Lasso, SQRT Lasso, and Lq Lasso. We adopt the alternating direction method of multipliers (ADMM) and convert the original optimization problem into a sequence of L1-penalized least squares minimization problems, which can be solved efficiently by combining linearization with multi-stage screening of variables. For the Dantzig selector, missing values in the design matrix and response vector can be tolerated.
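As a concrete illustration of the ADMM strategy described above, the sketch below implements a plain ADMM solver for the standard Lasso objective given in the Details section. This is a simplified stand-in, not the `slim` implementation: it omits the linearization and variable-screening steps, and the names `soft_threshold` and `lasso_admm` are hypothetical.

```r
# Soft-thresholding operator: the proximal map of the L1 norm.
soft_threshold <- function(v, t) sign(v) * pmax(abs(v) - t, 0)

# Minimal ADMM sketch for (1/2n)||Y - X b||_2^2 + lambda ||b||_1.
# Variable splitting b = z; rho is the ADMM penalty parameter.
lasso_admm <- function(X, Y, lambda, rho = 1, max.ite = 500, prec = 1e-6) {
  n <- nrow(X); d <- ncol(X)
  z <- u <- numeric(d)
  # Form (X'X/n + rho I) once; it is reused at every iteration.
  A <- crossprod(X) / n + rho * diag(d)
  XtY <- crossprod(X, Y) / n
  for (i in seq_len(max.ite)) {
    b <- solve(A, XtY + rho * (z - u))            # quadratic b-update
    z_new <- soft_threshold(b + u, lambda / rho)  # L1 prox z-update
    u <- u + b - z_new                            # dual update
    if (max(abs(z_new - z)) < prec) { z <- z_new; break }
    z <- z_new
  }
  drop(z)
}
```

Because the quadratic system matrix is fixed across iterations, a practical implementation would factor it once up front; the sketch calls `solve` each time only for brevity.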

### Usage

    slim(X, Y, lambda = NULL, nlambda = NULL, lambda.min.value = NULL,
         lambda.min.ratio = NULL, rho = 1, method = "lq", q = 2,
         res.sd = FALSE, prec = 1e-5, max.ite = 1e5, verbose = TRUE)

### Arguments

`Y`
The *n*-dimensional response vector.

`X`
The *n* by *d* design matrix.

`lambda`
A sequence of decreasing positive numbers to control the regularization. Typical usage is to leave the input `lambda = NULL` and let the program compute its own sequence based on `nlambda` and `lambda.min.ratio`.

`nlambda`
The number of values used in `lambda`.

`lambda.min.value`
The smallest value for `lambda` in the automatically generated sequence.

`lambda.min.ratio`
The smallest ratio of the value for `lambda` relative to its largest value: the generated sequence decreases from the largest value of `lambda` down to `lambda.min.ratio` times that value.

`rho`
The penalty parameter used in the ADMM algorithm.

`method`
Dantzig selector is applied if `method = "dantzig"`, Lq Lasso is applied if `method = "lq"`, and standard Lasso is applied if `method = "lasso"`.

`q`
The loss function used in Lq Lasso. It is only applicable when `method = "lq"`, with *1 <= q <= 2*.

`res.sd`
Flag of whether the response variables are standardized. The default value is `FALSE`.

`prec`
Stopping criterion. The default value is 1e-5.

`max.ite`
The iteration limit. The default value is 1e5.

`verbose`
Tracing information printing is disabled if `verbose = FALSE`. The default value is `TRUE`.
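Regularization paths of this kind are commonly generated on a log scale, from a data-dependent upper bound down to `lambda.min.ratio` times that bound. The sketch below shows that construction; the value of `lambda.max` here is only a placeholder, since `slim` derives its own, method-dependent upper bound from the data.

```r
# Log-scale sequence of nlambda values from lambda.max down to
# lambda.min.ratio * lambda.max (lambda.max is a stand-in value).
lambda.max <- 1
lambda.min.ratio <- 0.3
nlambda <- 5
lambda <- exp(seq(log(lambda.max), log(lambda.min.ratio * lambda.max),
                  length.out = nlambda))
```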

### Details

Standard Lasso solves the following optimization problem

*
\min {\frac{1}{2n}}|| Y - X β ||_2^2 + λ || β ||_1
*

Dantzig selector solves the following optimization problem

*
\min || β ||_1, \quad \textrm{s.t. } || X'(Y - X β) ||_{∞} < λ
*

*L_q* loss Lasso solves the following optimization problem

*
\min n^{-\frac{1}{q}}|| Y - X β ||_q + λ || β ||_1
*

where *1 <= q <= 2*. Lq Lasso is equivalent to LAD Lasso and SQRT Lasso when *q=1* and *q=2*, respectively.
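The three objectives above can be evaluated directly for a candidate coefficient vector. The sketch below (a hypothetical helper, not part of flare) computes the Lq Lasso objective from the Details section; with *q=1* it reduces to the LAD Lasso criterion and with *q=2* to the SQRT Lasso criterion.

```r
# Lq Lasso objective: n^(-1/q) * ||Y - X b||_q + lambda * ||b||_1,
# for 1 <= q <= 2 (illustration only, not slim's solver).
lq_objective <- function(Y, X, b, lambda, q) {
  n <- nrow(X)
  res <- Y - X %*% b
  n^(-1 / q) * sum(abs(res)^q)^(1 / q) + lambda * sum(abs(b))
}
```

With *q=1* the loss term is the mean absolute residual (LAD); with *q=2* it is the residual L2 norm scaled by *n^{-1/2}* (SQRT Lasso).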

### Value

An object with S3 class `"slim"` is returned:

`beta`
A matrix of regression estimates whose columns correspond to regularization parameters.

`intercept`
The values of the intercepts corresponding to regularization parameters.

`Y`
The value of `Y` used in the program.

`X`
The value of `X` used in the program.

`lambda`
The sequence of regularization parameters used in the program.

`nlambda`
The number of values used in `lambda`.

`method`
The `method` used in the program.

`sparsity`
The sparsity levels of the solution path.

`ite`
A list of two vectors, where `ite[[1]]` records the number of external iterations and `ite[[2]]` the number of internal iterations, with the i-th entry of each corresponding to the i-th regularization parameter.

`verbose`
The `verbose` used in the program.
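The `beta` and `sparsity` components are related: each column of `beta` holds the estimate for one regularization parameter, and its sparsity level summarizes how many entries are nonzero. A minimal mock illustration (the counting convention here is an assumption; the package may report normalized levels instead):

```r
# Mock coefficient matrix: d = 4 coefficients, 3 lambda values,
# filled column by column (one column per regularization parameter).
beta <- matrix(c(3, 2, 0, 0,
                 3, 0, 0, 0,
                 0, 0, 0, 0), nrow = 4)
# Nonzero count per column, one entry per regularization parameter.
sparsity <- colSums(beta != 0)
```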

### Author(s)

Xingguo Li, Tuo Zhao, Lie Wang, Xiaoming Yuan and Han Liu

Maintainer: Xingguo Li <xingguo.leo@gmail.com>

### References

1. E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. *Annals of Statistics*, 2007.

2. A. Belloni, V. Chernozhukov and L. Wang. Pivotal recovery of sparse signals via conic programming. *Biometrika*, 2012.

3. L. Wang. L1 penalized LAD estimator for high dimensional linear regression. *Journal of Multivariate Analysis*, 2012.

4. J. Liu and J. Ye. Efficient L1/Lq Norm Regularization. *Technical Report*, 2010.

5. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. *Foundations and Trends in Machine Learning*, 2011.

6. B. He and X. Yuan. On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers. *Technical Report*, 2012.

### See Also

`flare-package`, `print.slim`, `plot.slim`, `coef.slim` and `predict.slim`.

### Examples

```
## load library
library(flare)
## generate data
n = 50
d = 100
X = matrix(rnorm(n*d), n, d)
beta = c(3,2,0,1.5,rep(0,d-4))
eps = rnorm(n)
Y = X%*%beta + eps
nlamb = 5
ratio = 0.3
## Regression with "dantzig", general "lq" and "lasso" respectively
out1 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="dantzig")
out2 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lq",q=1)
out3 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lq",q=1.5)
out4 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lq",q=2)
out5 = slim(X=X,Y=Y,nlambda=nlamb,lambda.min.ratio=ratio,method="lasso")
## Display results
print(out4)
plot(out4)
coef(out4)
```