A beneficial property of the lasso penalty is that it shrinks coefficients to zero. A less beneficial property is that, in the process, the lasso tends to shrink the large coefficients too far towards zero. It has been argued that the lasso should "be considered as a variable screener rather than a model selector" (Su et al., 2017): it has a high true positive rate (i.e., a high probability that a true predictor will be selected), but at the cost of a high false positive rate (i.e., a high probability that a non-predictor will be selected).
When fitting prediction rule ensembles (PREs), this can be a nuisance, especially when we want to enforce sparsity. In what follows, we show how the relaxed lasso can be used to select a smaller number of terms for the final ensemble, while retaining relatively high predictive accuracy.
The relaxed lasso was proposed by Meinshausen (2007). The investigations of Su et al. (2017) provide insight into why the relaxed lasso is beneficial[^1]. Hastie, Tibshirani & Tibshirani (2017) propose a simplified version of the relaxed lasso, which is implemented in package `glmnet` and can be employed in package `pre`. Hastie et al. (2017) find "best subset selection generally performing better in high signal-to-noise (SNR) ratio regimes, and the lasso better in low SNR regimes" and that "the relaxed lasso [...] is the overall winner, performing just about as well as the lasso in low SNR scenarios, and as well as best subset selection in high SNR scenarios". A short introduction to the relaxed lasso is provided in the `glmnet` vignette "The Relaxed lasso" (accessible in R by typing `vignette("relax", "glmnet")`).
[^1]: As explained by Su et al. (2017), the shrinkage induced by the lasso penalty introduces pseudo-noise: when the regularization parameter $\lambda$ is large, the estimated coefficients are strongly biased towards zero, so the residuals still contain much of the effects associated with the variables that have already been selected. As strong variables are picked up, this shrinkage thus inflates the pseudo-noise, and non-predictors that are even slightly correlated with it may be selected as predictors. Thus, for variable selection we may want a somewhat larger value of $\lambda$, but to reduce the pseudo-noise we want to de-shrink (relax) the non-zero coefficients.
library("pre")
We fit a PRE to predict `Ozone` and inspect the result:
```r
airq <- airquality[complete.cases(airquality), ]
set.seed(42)
airq.ens <- pre(Ozone ~ ., data = airq)
tmp <- print(airq.ens)
```
What if we find an ensemble of `r round(nrow(tmp) - 1)` rules too complex and we want to retain only five rules? We could extract the fitted lasso path and choose the penalty parameter so that only five rules are retained:
```r
plot(airq.ens$glmnet.fit)
tab <- airq.ens$glmnet.fit$glmnet.fit
tab <- print(tab)
tab[9:16, ]
```
From the `Df` and `Lambda` columns, we can see that a $\lambda$ value of `r tab$Lambda[which(tab$Df == 5)[1]]` would result in a final ensemble comprising five terms. However, the plot above, which shows the cross-validated error ($y$-axis) against the value of the penalty parameter $\lambda$ (lower $x$-axis) and the corresponding number of selected terms (upper $x$-axis), indicates that this would yield substantially higher error. In part, this is due to overshrinkage, which the relaxed lasso mitigates by 'unshrinking' the non-zero coefficients.
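The $\lambda$ value yielding a given number of terms can also be looked up programmatically. The small helper below is only a sketch (it is not part of `pre`); it assumes, as in the output above, that printing the `glmnet` fit returns a table with `Df` and `Lambda` columns:

```r
## Hypothetical helper: largest lambda on the path yielding `df` non-zero terms
lambda_for_df <- function(pre_object, df) {
  path <- print(pre_object$glmnet.fit$glmnet.fit)  # table with Df, %Dev and Lambda
  path$Lambda[which(path$Df == df)[1]]
}
lambda_for_df(airq.ens, df = 5)
```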
We can use the relaxed lasso by specifying `relax = TRUE` when fitting a rule ensemble with function `pre`:
```r
set.seed(42)
airq.ens.rel <- pre(Ozone ~ ., data = airq, relax = TRUE)
```
If we specify `relax = TRUE`, the `gamma` argument (see `?cv.glmnet` for documentation on arguments `relax` and `gamma`) will by default be set to a range of five values in the interval [0, 1]. This can be overruled by specifying different values for argument `gamma` in the call to function `pre`.
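For instance, to evaluate only the fully relaxed, halfway-relaxed, and non-relaxed fits, a custom set of $\gamma$ values could be supplied in the call to `pre`. This is only a sketch; the specific values (and the object name) are chosen for illustration:

```r
## Sketch: supplying custom gamma values; 0 = unpenalized refit, 1 = ordinary lasso
set.seed(42)
airq.ens.rel3 <- pre(Ozone ~ ., data = airq, relax = TRUE,
                     gamma = c(0, 0.5, 1))
```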
Let us take a look at the regularization paths for the relaxed fits:
```r
plot(airq.ens.rel$glmnet.fit)
```
We obtained one regularization path for each value of $\gamma$. The path with $\gamma = 1$ is the default lasso path. Lower values of $\gamma$ 'unshrink' the non-zero lasso coefficients towards their unpenalized values. We see that for the $\lambda$ value yielding the minimum MSE (indicated by the left-most vertical dotted line), the value of $\gamma$ makes little difference for the MSE, but as $\lambda$ increases, lower values of $\gamma$ tend to yield lower MSE.
For model selection using the `"lambda.min"` criterion, by default the $\lambda$ and $\gamma$ combination yielding the lowest CV error is returned. For the `"lambda.1se"` criterion, the $\lambda$ and $\gamma$ combination yielding the sparsest solution with CV error within 1 standard error of the minimum is returned:
```r
fit <- airq.ens.rel$glmnet.fit$relaxed
mat <- data.frame(lambda.1se = c(fit$lambda.1se, fit$gamma.1se, fit$nzero.1se),
                  lambda.min = c(fit$lambda.min, fit$gamma.min, fit$nzero.min),
                  row.names = c("lambda", "gamma", "# of non-zero terms"))
mat
```
Thus, as the dotted vertical lines in the plots already suggest, with the default `"lambda.1se"` criterion a final model with `r fit$nzero.1se` terms will be selected, with coefficients obtained using a $\lambda$ value of `r round(fit$lambda.1se, digits = 3L)` and a $\gamma$ value of `r fit$gamma.1se`.
criterion, we obtain a more complex fit; $\gamma = 0$ still yields the lowest CV error. Note that use of "lambda.min"
increases the likelihood of overfitting, because function pre
uses the same data to extract the rules and fit the penalized regression, so in most cases the default "lambda.1se"
criterion can be expected to provide a less complex, better generalizable, often more accurate fit.
The default of function `pre` is to use the `"lambda.1se"` criterion. When `relax = TRUE` has been specified in the call to function `pre`, the default of all functions and S3 methods applied to objects of class `pre` (`print`, `plot`, `coef`, `predict`, `importance`, `explain`, `cvpre`, `singleplot`, `pairplot`, `interact`) is to use the solution obtained with `"lambda.1se"` and the $\gamma$ value yielding the lowest CV error at that value of $\lambda$. This can be overruled by specifying a different value of $\lambda$ (`penalty.par.val`) and/or $\gamma$ (`gamma`). Some examples:
```r
summary(airq.ens.rel)
summary(airq.ens.rel, penalty = "lambda.min")
summary(airq.ens.rel, penalty = 8, gamma = 0)
summary(airq.ens.rel, penalty = 8, gamma = 1)
```
Note how the lowest CV error is indeed obtained with the `"lambda.min"` criterion, while the default `"lambda.1se"` yields a sparser model, with accuracy within 1 standard error of that of `"lambda.min"`. If we want to go (much) sparser, we need to specify a higher value for the $\lambda$ penalty, and a lower value of $\gamma$ should likely be preferred, to retain good-enough predictive accuracy.
Some rules for specification of $\lambda$ and $\gamma$:
- If a numeric value of $\lambda$ is supplied, a (numeric) value for $\gamma$ must be supplied as well.
- Otherwise (if the default `"lambda.1se"` criterion is employed, or `"lambda.min"` is specified), the $\gamma$ value yielding the lowest CV error at the $\lambda$ value associated with the specified criterion will be used; this can be overruled by supplying the desired $\gamma$ value to the `gamma` argument.
- Multiple values of $\gamma$ can be passed to function `pre`, but all other methods and functions accept only a single value of $\gamma$ (this differs from several `glmnet` functions).
- If a specific $\lambda$ value is supplied, results are returned for the penalty parameter value on the fitted path that is closest to the specified value.
Also note that in the code chunk above we refer to the `penalty.par.val` argument by abbreviating it to `penalty`; this has the same effect as writing `penalty.par.val` in full.
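The same arguments can be supplied to the other methods listed above. As a sketch, predictions for a few training observations could be obtained at a user-specified $\lambda$ and $\gamma$ as follows (the values 8 and 0 are chosen here for illustration only):

```r
## Sketch: predictions from a sparser, unpenalized ('relaxed') solution
predict(airq.ens.rel, newdata = airq[1:6, ], penalty.par.val = 8, gamma = 0)
```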
Using $\gamma = 0$ amounts to a forward stepwise selection approach, with the entry order of the variables (rules and linear terms) determined by the lasso. This approach can be useful if we want a rule ensemble with low complexity and high generalizability, and especially when we want to decide a priori on the number of terms to retain. By specifying a high value of $\lambda$, we retain only a small number of rules, while specifying $\gamma = 0$ provides unpenalized coefficients, thus avoiding the overshrinking of large coefficients. In terms of predictive accuracy this approach may not perform best, but if low complexity (interpretability) is most important, it is a very useful approach that does not reduce predictive accuracy too much.
To use forward stepwise regression with variable entry order determined by the lasso, we specify a $\gamma$ value of 0 and control the number of variables retained through the value of $\lambda$ (`penalty.par.val`). To find the value of $\lambda$ corresponding to the number of terms one wants to retain, check (results not shown here to save space):
```r
airq.ens.rel$glmnet.fit$glmnet.fit
```
Here, we use the value of $\lambda$ that we found earlier to yield a five-term ensemble:
```r
coefs <- coef(airq.ens.rel, gamma = 1, penalty = 8)
coefs[coefs$coefficient != 0, ]
coefs <- coef(airq.ens.rel, gamma = 0, penalty = 8)
coefs[coefs$coefficient != 0, ]
```
Note that we have retained exactly the same set of terms with the unpenalized relaxed lasso ($\gamma = 0$) as with the default (non-relaxed) lasso ($\gamma = 1$), but the terms obtained different coefficient values. The CV error estimates (returned by function `summary` above) indicate that this is beneficial for prediction.
To evaluate the predictive accuracy of the final fitted model, and to estimate its generalization error to unseen observations, cross-validation is more appropriate than fitting and evaluating the model on the same training data. Function `cvpre` can be used for this purpose.
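A minimal sketch of such a cross-validation is shown below; the number of folds and the name of the returned accuracy component are assumptions here (see `?cvpre` for the actual arguments and return value):

```r
## Sketch: k-fold cross-validation of the full rule-ensemble procedure
set.seed(42)
airq.cv <- cvpre(airq.ens.rel, k = 10)  # k = 10 folds assumed; refits the ensemble in each fold
airq.cv$accuracy                        # assumed to contain CV estimates of prediction error
```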
In case you obtain different results, the results above were obtained using the following setup:
```r
sessionInfo()
```
Hastie, T., Tibshirani, R., & Tibshirani, R. J. (2017). Extended comparisons of best subset selection, forward stepwise selection, and the lasso. arXiv:1707.08692, https://arxiv.org/abs/1707.08692.
Meinshausen, N. (2007). Relaxed lasso. Computational Statistics & Data Analysis, 52(1), 374-393. https://doi.org/10.1016/j.csda.2006.12.019
Su, W., Bogdan, M., & Candes, E. (2017). False discoveries occur early on the lasso path. The Annals of Statistics, 45(5), 2133-2150. https://doi.org/10.1214/16-AOS1521