
The function `camel.slim` implements the LAD/L1 Lasso, SQRT/L2 Lasso, and calibrated Dantzig selector using L1 regularization.

```
camel.slim(Y, X, lambda = NULL, nlambda = NULL, lambda.min.ratio = NULL,
           method = "lq", q = 2, prec = 1e-4, max.ite = 1e4, mu = 0.01,
           intercept = TRUE, verbose = TRUE)
```

`Y` |
The n-dimensional response vector. |

`X` |
The n by d design matrix. |

`lambda` |
A sequence of decreasing positive values to control the regularization. Typical usage is to leave the input `NULL` and have the program compute its own `lambda` sequence based on `nlambda` and `lambda.min.ratio`. |

`nlambda` |
The number of values used in `lambda`. |

`lambda.min.ratio` |
The smallest value for `lambda`, as a fraction of the upper bound (MAX) of the regularization parameter; the program generates a sequence of `nlambda` values decreasing from MAX to `lambda.min.ratio`*MAX. |

`method` |
Dantzig selector is applied if `method = "dantzig"`, and Lq Lasso is applied if `method = "lq"`. |

`q` |
The loss function used in Lq Lasso. It is only applicable when `method = "lq"`, and must be either 1 (LAD Lasso) or 2 (SQRT Lasso). The default value is 2. |

`prec` |
Stopping criterion. The default value is 1e-4. |

`max.ite` |
The iteration limit. The default value is 1e4. |

`mu` |
The smoothing parameter. The default value is 0.01. |

`intercept` |
Whether the intercept is included in the model. The default value is `TRUE`. |

`verbose` |
Tracing information is disabled if `verbose = FALSE`. The default value is `TRUE`. |

Calibrated linear regression adjusts the regularization with respect to the noise level. It thus achieves both improved finite-sample performance and insensitivity to the choice of tuning parameter.
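The calibration can be made concrete by writing out the loss. A minimal base-R sketch (not the package implementation; `sqrt.lasso.obj` is a hypothetical helper, and scaling conventions may differ from camel's): for `method = "lq"` with `q = 2`, the SQRT Lasso objective uses the unsquared L2 loss, which makes the optimal `lambda` independent of the noise level.

```r
## Hypothetical sketch of the SQRT Lasso objective (base R only):
##   ||Y - X b||_2 / sqrt(n) + lambda * ||b||_1
## The unsquared L2 loss is what calibrates lambda against the noise level.
sqrt.lasso.obj <- function(Y, X, b, lambda) {
  n <- nrow(X)
  sqrt(sum((Y - X %*% b)^2)) / sqrt(n) + lambda * sum(abs(b))
}

## Evaluate the objective on a tiny fixed example
X <- diag(2)   # 2 x 2 identity design
Y <- c(1, 2)
sqrt.lasso.obj(Y, X, b = c(0, 0), lambda = 0.5)  # sqrt(5)/sqrt(2)
```

With `q = 1` the L2 loss above is replaced by the L1 (LAD) loss, which yields the same tuning insensitivity under heavy-tailed noise.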

An object with S3 class `"camel.slim"` is returned:

`beta` |
A matrix of regression estimates whose columns correspond to regularization parameters. |

`intercept` |
The value of intercepts corresponding to regularization parameters. |

`Y` |
The value of `Y` used in the program. |

`X` |
The value of `X` used in the program. |

`lambda` |
The sequence of regularization parameters `lambda` used in the program. |

`nlambda` |
The number of values used in `lambda`. |

`method` |
The `method` from the input. |

`sparsity` |
The sparsity levels of the solution path. |

`ite` |
A list of two vectors, where `ite[[1]]` is the number of external iterations and `ite[[2]]` is the number of internal iterations, with the i-th entry of each corresponding to the i-th regularization parameter. |

`verbose` |
The `verbose` from the input. |
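For orientation, the shape of the `ite` component can be sketched in base R with hypothetical iteration counts for a path of 5 regularization parameters:

```r
## Hypothetical sketch of the `ite` component for a 5-value lambda path:
## ite[[1]] holds external iteration counts, ite[[2]] internal counts;
## the i-th entry of each corresponds to the i-th regularization parameter.
ite <- list(c(12, 15, 21, 30, 42),       # external iterations per lambda
            c(120, 150, 210, 300, 420))  # internal iterations per lambda
ite[[1]][3]  # external iterations used for the 3rd regularization parameter
```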

Xingguo Li, Tuo Zhao, and Han Liu

Maintainer: Xingguo Li <xingguo.leo@gmail.com>

1. A. Belloni, V. Chernozhukov and L. Wang. Pivotal recovery of sparse signals via conic programming. *Biometrika*, 2012.

2. L. Wang. L1 penalized LAD estimator for high dimensional linear regression. *Journal of Multivariate Analysis*, 2013.

3. E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. *Annals of Statistics*, 2007.

```
## Generate the design matrix and regression coefficient vector
n = 200
d = 400
X = matrix(rnorm(n*d), n, d)
beta = c(3,2,0,1.5,rep(0,d-4))
## Generate response using Gaussian noise, and fit a sparse linear model using SQRT Lasso
eps.sqrt = rnorm(n)
Y.sqrt = X%*%beta + eps.sqrt
out.sqrt = camel.slim(X = X, Y = Y.sqrt, lambda = seq(0.8,0.2,length.out=5))
## Generate response using Cauchy noise, and fit a sparse linear model using LAD Lasso
eps.lad = rt(n = n, df = 1)
Y.lad = X%*%beta + eps.lad
out.lad = camel.slim(X = X, Y = Y.lad, q = 1, lambda = seq(0.5,0.2,length.out=5))
## Visualize the solution path
plot(out.sqrt)
plot(out.lad)
```

```
Loading required package: lattice
Loading required package: igraph
Attaching package: 'igraph'
The following objects are masked from 'package:stats':
decompose, spectrum
The following object is masked from 'package:base':
union
Loading required package: MASS
Loading required package: Matrix
Sparse Linear Regression with L1 Regularization.
SQRT Lasso regression via MFISTA.
Sparse Linear Regression with L1 Regularization.
LAD Lasso regression via MFISTA.
```
