Description
Fits an elastic net model to a continuous response, using the square-root method to select the penalty parameter, so no cross-validation is required. The data are automatically centered and unit scaled, and coefficients are returned on the original scale of the inputs; it is therefore not necessary to center or standardize the inputs beforehand. Note that the returned coefficients have the L2 penalty relaxed after fitting, per Zou & Hastie (2005), rather than being the naive estimates returned by glmnet.
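For intuition, the square-root lasso of Belloni, Chernozhukov & Wang (2011) chooses a pivotal penalty of roughly the form lambda = const * sqrt(n) * qnorm(1 - a / (2 * p)) with a small, so the penalty depends only on the sample size n and the number of predictors p, not on the unknown noise level. The sketch below computes that quantity in base R; sqrt_lasso_lambda is an illustrative name, and exactly how this function maps conf.level to its penalty is an assumption here, not taken from this page.

## Illustrative only: pivotal penalty level in the style of the square-root
## lasso (Belloni, Chernozhukov & Wang, 2011); not this package's internals.
sqrt_lasso_lambda <- function(n, p, conf.level = 0.95, const = 1.1) {
  a <- 1 - conf.level                    # tail probability implied by conf.level
  const * sqrt(n) * qnorm(1 - a / (2 * p))
}

sqrt_lasso_lambda(n = 100, p = 20)       # e.g. 100 observations, 20 predictors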
Arguments

formula     a model formula.
data        a data frame.
alpha       the mixing parameter, a value between 0 and 1; defaults to 0.5.
conf.level  the confidence level used to set the penalty; defaults to 0.95.
Value

a model fit.
References

Zou, H. & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67(2), 301-320.

Belloni, A., Chernozhukov, V., & Wang, L. (2011). Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, 98(4), 791-806.

van de Geer, S. (2016). The square-root lasso. In: Estimation and Testing Under Sparsity. Lecture Notes in Mathematics, vol. 2159. Springer, Cham.

Raninen, E. & Ollila, E. (2017). Scaled and square-root elastic net. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, 2017, pp. 4336-4340. doi: 10.1109/ICASSP.2017.7952975.
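As a sketch of how the arguments above fit together, a call might look like the following; sqrt_enet is a hypothetical name standing in for the fitting function documented here, and mtcars is used purely as an example data frame with a continuous response.

## sqrt_enet is a hypothetical name; substitute the package's actual function.
fit <- sqrt_enet(mpg ~ ., data = mtcars, alpha = 0.5, conf.level = 0.95)
coef(fit)   # coefficients are reported on the original scale of the inputs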