Implements the Stability Approach to Regularization Selection (StARS) for Lasso

```
lasso.stars(x, y, rep.num = 20, lambda = NULL, nlambda = 100,
  lambda.min.ratio = 0.001, stars.thresh = 0.1, sample.ratio = NULL,
  alpha = 1, verbose = TRUE)
```

`x`
The `n` by `d` design matrix.

`y`
The response vector of length `n`.

`rep.num`
The number of subsampling repetitions for StARS. The default value is `20`.

`lambda`
A sequence of decreasing positive numbers to control the regularization. Typical usage is to leave the input `lambda = NULL` and let the program compute its own sequence based on `nlambda` and `lambda.min.ratio`.

`nlambda`
The number of regularization parameters. The default value is `100`.

`lambda.min.ratio`
The smallest value of `lambda`, as a fraction of its largest value. The default value is `0.001`.

`stars.thresh`
The threshold on the variability in StARS. The default value is `0.1`.

`sample.ratio`
The subsampling ratio. The default value is `NULL`, in which case the ratio is chosen automatically.

`alpha`
The tuning parameter for the elastic-net penalty (`alpha = 1` gives the lasso). The default value is `1`.

`verbose`
If `verbose = TRUE`, printing of progress information is enabled. The default value is `TRUE`.
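When `lambda` is left as `NULL`, the sequence is generated from `nlambda` and `lambda.min.ratio`. A minimal sketch of that construction, assuming the usual lasso convention `lambda.max = max(|t(x) %*% y|) / n` (an assumption for illustration, not necessarily this package's exact internal formula):

```r
# Sketch: build a decreasing, log-spaced lambda sequence from nlambda and
# lambda.min.ratio. lambda.max here follows the standard lasso convention
# (assumed, not taken from the package source).
set.seed(1)
n <- 50; d <- 80
x <- matrix(rnorm(n * d), n, d)
y <- rnorm(n) + x %*% c(3, 2, 1.5, rep(0, d - 3))

nlambda <- 100
lambda.min.ratio <- 0.001
lambda.max <- max(abs(crossprod(x, y))) / n
lambda <- exp(seq(log(lambda.max),
                  log(lambda.min.ratio * lambda.max),
                  length.out = nlambda))
head(lambda)  # decreasing positive values
```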

StARS selects the optimal regularization parameter based on the variability of the solution path: among all solutions whose variability is below the threshold, it chooses the least sparse one. The alternative threshold `0.05` is appropriate under the assumption that the model is correctly specified; in applications the model is usually only an approximation of the truth, so `0.1` is the safer choice. The implementation is based on the popular package "glmnet".
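The idea behind the variability criterion can be sketched directly with "glmnet". This is an illustration of the StARS computation, not the package's exact code; the `sample.ratio` rule below is a common StARS default and is an assumption here:

```r
# Sketch of StARS for the lasso: estimate how often each variable is
# selected across subsamples, turn that into an instability score per
# lambda, and pick the least sparse solution below the threshold.
library(glmnet)
set.seed(1)
n <- 50; d <- 80
x <- matrix(rnorm(n * d), n, d)
y <- rnorm(n) + x %*% c(3, 2, 1.5, rep(0, d - 3))

rep.num <- 20
stars.thresh <- 0.1
sample.ratio <- if (n > 144) 10 * sqrt(n) / n else 0.8  # assumed default rule
m <- floor(n * sample.ratio)
lambda.max <- max(abs(crossprod(x, y))) / n
lambda <- exp(seq(log(lambda.max), log(0.01 * lambda.max), length.out = 50))

# Selection frequency of each variable at each lambda over the subsamples
freq <- matrix(0, d, length(lambda))
for (r in seq_len(rep.num)) {
  idx <- sample(n, m)
  fit <- glmnet(x[idx, ], y[idx], lambda = lambda, alpha = 1)
  freq <- freq + (as.matrix(fit$beta) != 0)
}
freq <- freq / rep.num

# Instability 2*p*(1-p), averaged over variables, monotonized along the
# decreasing-lambda (increasingly dense) direction
instab <- colMeans(2 * freq * (1 - freq))
instab.mono <- cummax(instab)
below <- which(instab.mono <= stars.thresh)
opt.index <- if (length(below) > 0) max(below) else 1
opt.lambda <- lambda[opt.index]
```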

An object with S3 class `"stars"` is returned:

`path`
The solution path of regression coefficients (a `d` by `nlambda` matrix).

`lambda`
The sequence of regularization parameters used in the lasso.

`opt.index`
The index of the optimal regularization parameter.

`opt.beta`
The optimal regression coefficients.

`opt.lambda`
The optimal regularization parameter.

`Variability`
The variability along the solution path.
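The returned components fit together as follows; a hypothetical usage sketch (assumes `x` and `y` are defined as in the Examples section and the package is loaded):

```r
# Hypothetical use of a fitted "stars" object; component names follow
# the Value section above.
out <- lasso.stars(x, y)
out$opt.lambda            # selected regularization parameter
which(out$opt.beta != 0)  # indices of the selected variables
# opt.lambda is the entry of lambda at opt.index:
all.equal(out$opt.lambda, out$lambda[out$opt.index])
```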

This function only works in the setting where `d > 1`.

Tuo Zhao, Han Liu, Kathryn Roeder, John Lafferty, and Larry Wasserman

Maintainers: Tuo Zhao <tourzhao@andrew.cmu.edu>; Han Liu <hanliu@cs.jhu.edu>

1. Han Liu, Kathryn Roeder and Larry Wasserman. Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models. *Advances in Neural Information Processing Systems*, 2010.

2. Jerome Friedman, Trevor Hastie and Rob Tibshirani. Regularization Paths for Generalized Linear Models via Coordinate Descent. *Journal of Statistical Software*, Vol. 33, No. 1, 2008.

```
# generate data
x = matrix(rnorm(50*80), 50, 80)
beta = c(3, 2, 1.5, rep(0, 77))
y = rnorm(50) + x %*% beta

# StARS for lasso with the default settings
z1 = lasso.stars(x, y)
summary(z1)
plot(z1)

# StARS for lasso with a stricter variability threshold
z2 = lasso.stars(x, y, stars.thresh = 0.05)
summary(z2)
plot(z2)

# StARS for lasso with more subsampling repetitions
z3 = lasso.stars(x, y, rep.num = 50)
summary(z3)
plot(z3)
```
