Runs `glmnet` once on the full `x` matrix, then again on each distinct subset of columns appearing along the solution path. The penalty may be the lasso (`alpha = 1`) or the elastic net (0 < `alpha` < 1). The outcome (`y`) may be continuous or binary.
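For reference, the elastic net penalty selected by `alpha` is a convex combination of the ridge and lasso penalties. A minimal base-R sketch (the function name `elastic_net_penalty` is ours, for illustration only, not part of the package):

```r
## Elastic net penalty for coefficients beta and mixing parameter alpha:
##   P_alpha(beta) = sum_j ( (1 - alpha)/2 * beta_j^2 + alpha * |beta_j| )
## alpha = 1 gives the lasso (pure L1); 0 < alpha < 1 mixes in a ridge term.
elastic_net_penalty <- function(beta, alpha) {
  sum((1 - alpha) / 2 * beta^2 + alpha * abs(beta))
}

elastic_net_penalty(c(1, -2), alpha = 1)    ## lasso: |1| + |-2| = 3
elastic_net_penalty(c(1, -2), alpha = 0.5)  ## 0.25 * (1 + 4) + 0.5 * 3 = 2.75
```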

| Argument | Description |
| --- | --- |
| `x` | Input matrix, of dimension nobs x nvars; each row is an observation vector. Can be in sparse matrix format (inheriting from class `"sparseMatrix"`). |
| `y` | Response variable. Quantitative for `family = "gaussian"`; binary (e.g. a 0/1 vector) for `family = "binomial"`. |
| `family` | Response type (see above). |
| `nlambda` | The number of `lambda` values for the main solution path (see `glmnet`). |
| `alpha` | Elastic net mixing parameter (see `glmnet`). |
| `relax` | Should the model be relaxed? If `FALSE`, only the main glmnet model is run and no relaxed models are. |
| `relax.nlambda` | Like `nlambda`, but for the secondary (relaxed) models. |
| `relax.max.vars` | Maximum number of variables for relaxed models. No relaxation will be done for subsets along the regularization path containing more than `relax.max.vars` variables. |
| `lambda` | Optional user-supplied `lambda` sequence (see `glmnet`). |
| `relax.lambda.index` | Vector which indexes the `lambda` argument and specifies the values at which a relaxed model should be fit. Optional; meant primarily for use by `cv.relaxnet`. |
| `relax.lambda.list` | List of `lambda` values to use for the relaxed models. Optional; meant primarily for use by `cv.relaxnet`. |
| `...` | Further arguments passed to `glmnet`. Use with caution, as this has not yet been tested. |
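A hedged sketch combining several of the arguments above (the exact values are illustrative; the call is skipped if the relaxnet package is not installed):

```r
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
colnames(x) <- paste0("x", 1:20)  ## unique column names, as in the Examples
y <- rowSums(x[, 1:3]) + rnorm(100)

if (requireNamespace("relaxnet", quietly = TRUE)) {
  ## elastic net (alpha = 0.5), relaxing only subsets of at most 10 variables
  fit <- relaxnet::relaxnet(x, y, family = "gaussian", alpha = 0.5,
                            relax = TRUE, relax.max.vars = 10)
}
```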

Version 1.9-5 of glmnet no longer allows a single-column `x`. This broke relaxnet. As a temporary fix, relaxed models containing a single variable now run `glm` instead of `glmnet`, and only the full least squares (or logistic regression, for `family = "binomial"`) solution is considered for that relaxed model. All relaxed models containing more than one variable, as well as the main model, still use the complete `glmnet` solution path.
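The fallback described above can be illustrated with base R alone (a sketch of the behavior, not the package's actual internals): for a one-variable relaxed model, an unpenalized `glm` fit gives the full least-squares (or logistic) solution directly.

```r
set.seed(42)
x1 <- rnorm(50)
y  <- 2 * x1 + rnorm(50)

## family = gaussian(): ordinary least squares on the single variable
fit_ls <- glm(y ~ x1, family = gaussian())

y_bin <- rbinom(50, 1, plogis(x1))
## family = binomial(): logistic regression on the single variable
fit_lr <- glm(y_bin ~ x1, family = binomial())

coef(fit_ls)  ## slope should be near the true value of 2
```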

An object of class `"relaxnet"` with the following components:

| Component | Description |
| --- | --- |
| `call` | A copy of the call which produced this object. |
| `main.glmnet.fit` | The object resulting from running `glmnet` on the full `x` matrix. |
| `relax` | The value of the `relax` argument. |
| `relax.glmnet.fits` | A list containing the secondary (relaxed) `glmnet` fits. |
| `relax.num.vars` | Vector giving the number of variables in each "relaxed" model. |
| `relax.lambda.index` | This vector indexes `result$main.glmnet.fit$lambda` and gives the `lambda` values at which the `relax.glmnet.fits` were obtained. |
| `total.time` | Total time in seconds to produce this result. |
| `main.fit.time` | Time in seconds to produce the main `glmnet` fit. |
| `relax.keep` | In certain cases some of the relaxed models are removed after fitting. |
| `relax.fit.times` | Vector of times in seconds to produce the secondary "relaxed" models. |
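A short sketch of inspecting these components after a fit (hedged; requires the relaxnet package, and the calls are skipped if it is not installed):

```r
set.seed(7)
x <- matrix(rnorm(100 * 20), 100, 20)
colnames(x) <- paste0("x", 1:20)
y <- rowSums(x[, 1:3]) + rnorm(100)

if (requireNamespace("relaxnet", quietly = TRUE)) {
  fit <- relaxnet::relaxnet(x, y)
  fit$relax.num.vars                                   ## sizes of the relaxed models
  fit$main.glmnet.fit$lambda[fit$relax.lambda.index]   ## lambdas at which they were fit
  fit$total.time                                       ## total fitting time in seconds
}
```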

This is a preliminary release and several additional features are planned for later versions.

Stephan Ritter, with design contributions from Alan Hubbard.

Much of the code (and some help file content) is adapted from the glmnet package, whose authors are Jerome Friedman, Trevor Hastie and Rob Tibshirani.

Stephan Ritter and Alan Hubbard, Tech report (forthcoming).

Jerome Friedman, Trevor Hastie and Rob Tibshirani (2010) “Regularization Paths for Generalized Linear Models via Coordinate Descent.” *Journal of Statistical Software* **33**(1).

Nicolai Meinshausen (2007) “Relaxed Lasso.” *Computational Statistics and Data Analysis* **52**(1), 374-393.

`glmnet`, `cv.relaxnet`, `predict.relaxnet`

```r
## generate predictor matrix
nobs <- 100
nvars <- 200
set.seed(23)
x <- matrix(rnorm(nobs * nvars), nobs, nvars)
## make sure it has unique colnames
colnames(x) <- paste("x", 1:ncol(x), sep = "")
## let y depend on first 5 columns plus noise
y <- rowSums(x[, 1:5]) + rnorm(nrow(x))
## default is family = "gaussian"
result1 <- relaxnet(x, y)
summary(result1)
## now fit family = "binomial" model
y.bin <- rbinom(nrow(x), 1, prob = plogis(0.2 * rowSums(x[, 1:5])))
result2 <- relaxnet(x, y.bin, family = "binomial")
summary(result2)
```
