Extends the glmnet package with "relaxation": glmnet is run once on the entire predictor matrix, then again on each distinct subset of variables appearing along the regularization path. Relaxation may improve prediction accuracy for truly sparse data-generating models and may yield fewer false positives (i.e. fewer noncontributing predictors in the final model). The penalty may be the lasso (alpha = 1) or the elastic net (0 < alpha < 1). In this version, family may be "gaussian" or "binomial" only. The package takes advantage of the fast FORTRAN code from the glmnet package.
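As a sketch of the basic interface (following the usual glmnet convention of a predictor matrix `x` and response vector `y`; exact argument defaults are assumptions, not taken from the package documentation):

```r
library(relaxnet)

## Simulate a truly sparse gaussian model: only the first 3 of 20
## predictors actually contribute to the response.
set.seed(1)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- x[, 1] + 0.5 * x[, 2] - 0.5 * x[, 3] + rnorm(n)

## Fit the lasso (alpha = 1) with relaxation: glmnet is refit on each
## distinct variable subset encountered along the regularization path.
fit <- relaxnet(x, y, family = "gaussian", alpha = 1)

## Summarize the main fit and the relaxed submodels.
summary(fit)
```

Setting 0 &lt; alpha &lt; 1 instead gives the elastic net penalty; relaxation proceeds the same way over the resulting path.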

| Author | Stephan Ritter, Alan Hubbard |
| Date of publication | 2013-08-16 18:29:00 |
| Maintainer | Stephan Ritter <stephanritterRpacks@gmail.com> |
| License | GPL (>= 2) |
| Version | 0.3-2 |
| URL | http://cran.r-project.org/package=relaxnet |

**cv.relaxnet:** Cross-Validation for relaxnet Models

**predict.cv.relaxnet:** Predict Methods for cv.relaxnet and cv.alpha.relaxnet Objects

**predict.relaxnet:** Predict Method for "relaxnet" Objects

**print.relaxnet:** Print Method for relaxnet Objects

**relaxnet:** Relaxation (as in Relaxed Lasso, Meinshausen 2007) applied to...

**relaxnet-package:** Relaxation (as in Relaxed Lasso, Meinshausen 2007) Applied to...

**summary.relaxnet:** Generate and Print Summaries of Class "relaxnet" Objects
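A sketch of the cross-validation workflow these methods support. `cv.relaxnet` cross-validates over both the main glmnet path and the relaxed submodels; argument names beyond `x`, `y`, `family`, and `newx` are assumptions based on glmnet conventions:

```r
library(relaxnet)

## Simulate a binomial outcome driven by two of 20 predictors.
set.seed(2)
n <- 100; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(x[, 1] - x[, 2]))

## Cross-validate the relaxed elastic net (0 < alpha < 1) for a
## binomial family.
cv.fit <- cv.relaxnet(x, y, family = "binomial", alpha = 0.5)

## Predict fitted probabilities for new observations at the
## CV-selected model (predict.cv.relaxnet dispatches here).
newx <- matrix(rnorm(5 * p), 5, p)
predict(cv.fit, newx = newx, type = "response")
```

The family = "binomial" call illustrates the second of the two supported families; replacing it with "gaussian" and a continuous `y` gives the regression case.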

relaxnet

relaxnet/NAMESPACE

relaxnet/R

relaxnet/R/predict.cv.relaxnet.R
relaxnet/R/predict.relaxnet.R
relaxnet/R/relaxnet.R
relaxnet/R/cv.relaxnet.R
relaxnet/R/cv.alpha.relaxnet.R
relaxnet/R/predict.cv.alpha.relaxnet.R
relaxnet/R/summary.relaxnet.R
relaxnet/R/zzz.R
relaxnet/MD5

relaxnet/DESCRIPTION

relaxnet/ChangeLog

relaxnet/man

relaxnet/man/predict.cv.relaxnet.Rd
relaxnet/man/print.relaxnet.Rd
relaxnet/man/cv.relaxnet.Rd
relaxnet/man/summary.relaxnet.Rd
relaxnet/man/relaxnet-package.Rd
relaxnet/man/relaxnet.Rd
relaxnet/man/predict.relaxnet.Rd