This is an experimental feature. There may be bugs; use carefully!
```r
library(lessSEM)
```
The `mixedPenalty` function allows you to add multiple penalties to a single model.
For instance, you may want to regularize both loadings and regressions in a SEM. In this case, using a single penalty (e.g., lasso) with one tuning parameter for both sets of parameters may not be what you want, because the penalty function is sensitive to the scales of the parameters. Instead, you may want to use two separate lasso penalties for loadings and regressions. Similarly, separate penalties for different parameters have, for instance, been proposed for multi-group models (Geminiani et al., 2021).
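Conceptually, the mixed-penalty approach minimizes the unregularized fitting function plus one penalty term per parameter set. A minimal sketch in our own notation (not necessarily the package's internal one):

$$
F(\boldsymbol\theta) = f(\boldsymbol\theta) + \sum_{j} p_j\!\left(\boldsymbol\theta_{S_j};\, \lambda_j\right),
$$

where $f$ is the unregularized fitting function (e.g., the maximum likelihood fit), $S_j$ is the set of parameters targeted by the $j$-th penalty (e.g., the cross-loadings or the regressions), and $p_j$ is the corresponding penalty function with its own tuning parameters $\lambda_j$ (and, depending on the penalty, $\theta_j$ or $\alpha_j$).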
Important: You cannot impose two penalties on the same parameter!
Models are fitted with the glmnet or ista optimizer. Note that the optimizers differ in which penalties they support. The following table provides an overview:
| Penalty | Function | glmnet | ista |
| --- | --- | --- | --- |
| lasso | addLasso | x | x |
| elastic net | addElasticNet | x* | - |
| cappedL1 | addCappedL1 | x | x |
| lsp | addLsp | x | x |
| scad | addScad | x | x |
| mcp | addMcp | x | x |
By default, glmnet will be used. Note that the elastic net penalty can only be combined with other elastic net penalties.
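For example, an all-elastic-net setup might look as follows. This is a sketch, assuming the `lavaanModel` object defined below; see `?addElasticNet` for the exact arguments:

```r
# Sketch: elastic net penalties can only be mixed with other elastic
# net penalties and require the glmnet optimizer. `lavaanModel` is
# the model defined below.
mpEn <- lavaanModel |>
  mixedPenalty() |>
  addElasticNet(regularized = c("c2", "c3", "c4"),
                lambdas = seq(0, 1, .1),
                alphas = c(.25, .5, .75)) |>
  addElasticNet(regularized = c("r1", "r2", "r3"),
                lambdas = seq(0, 1, .2),
                alphas = c(.25, .5, .75))
```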
In the following model, we will allow for cross-loadings (`c2`-`c4`). We want to regularize both the cross-loadings and the regression coefficients (`r1`-`r3`):
```r
library(lavaan)

model <- '
  # latent variable definitions
     ind60 =~ x1 + x2 + x3 + c2*y2 + c3*y3 + c4*y4
     dem60 =~ y1 + y2 + y3 + y4
     dem65 =~ y5 + y6 + y7 + c*y8
  # regressions
    dem60 ~ r1*ind60
    dem65 ~ r2*ind60 + r3*dem60
'

lavaanModel <- sem(model, data = PoliticalDemocracy)
```
Next, we add separate lasso penalties for the loadings and the regressions:
```r
mp <- lavaanModel |>
  mixedPenalty() |>
  addLasso(regularized = c("c2", "c3", "c4"),
           lambdas = seq(0, 1, .1)) |>
  addLasso(regularized = c("r1", "r2", "r3"),
           lambdas = seq(0, 1, .2))
```
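The tuning grids of the individual penalties appear to be fully crossed (an assumption that is consistent with the configuration indices shown below), so this setup results in 11 × 6 = 66 tuning parameter configurations:

```r
# Number of tuning parameter configurations if the two lambda grids
# are fully crossed (assumption; consistent with the configuration
# indices reported below):
length(seq(0, 1, .1)) * length(seq(0, 1, .2))
#> [1] 66
```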
Note that we can use the pipe operator to add multiple penalties. The penalties don't all have to be of the same type; the following would also work:
```r
mp <- lavaanModel |>
  mixedPenalty() |>
  addLasso(regularized = c("c2", "c3", "c4"),
           lambdas = seq(0, 1, .1)) |>
  addScad(regularized = c("r1", "r2", "r3"),
          lambdas = seq(0, 1, .2),
          thetas = 3.7)
```
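For reference, the default `thetas = 3.7` stems from Fan and Li (2001), who proposed the scad penalty. A sketch of the scad penalty in its common textbook form, with tuning parameters $\lambda \ge 0$ and $\theta > 2$ (lessSEM's internal parameterization may differ in notation):

$$
p(x;\, \lambda, \theta) =
\begin{cases}
\lambda |x| & \text{if } |x| \le \lambda,\\[4pt]
\dfrac{2\theta\lambda|x| - x^2 - \lambda^2}{2(\theta - 1)} & \text{if } \lambda < |x| \le \theta\lambda,\\[4pt]
\dfrac{\lambda^2(\theta + 1)}{2} & \text{if } |x| > \theta\lambda.
\end{cases}
$$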
To fit the model, we use the `fit` function:
```r
fitMp <- fit(mp)
```
To check which parameter has been regularized with which penalty, we can look at the `penalty` slot of the resulting object:
```r
fitMp@penalty
#>    ind60=~x2    ind60=~x3           c2           c3           c4    dem60=~y2    dem60=~y3    dem60=~y4    dem65=~y6 
#>       "none"       "none"      "lasso"      "lasso"      "lasso"       "none"       "none"       "none"       "none" 
#>    dem65=~y7            c           r1           r2           r3       x1~~x1       x2~~x2       x3~~x3       y2~~y2 
#>       "none"       "none"       "scad"       "scad"       "scad"       "none"       "none"       "none"       "none" 
#>       y3~~y3       y4~~y4       y1~~y1       y5~~y5       y6~~y6       y7~~y7       y8~~y8 ind60~~ind60 dem60~~dem60 
#>       "none"       "none"       "none"       "none"       "none"       "none"       "none"       "none"       "none" 
#> dem65~~dem65 
#>       "none"
```
We can access the best parameters according to the BIC with:
```r
coef(fitMp, criterion = "BIC")
#> 
#>  Tuning                       ||--|| Estimates
#>  ---------------------------- ||--|| ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
#>  tuningParameterConfiguration ||--||  ind60=~x2  ind60=~x3         c2         c3         c4  dem60=~y2  dem60=~y3  dem60=~y4
#>  ============================ ||--|| ========== ========== ========== ========== ========== ========== ========== ==========
#>                       11.0000 ||--||     2.1817     1.8188          .          .          .     1.3540     1.0440     1.2995
#> 
#> 
#>  ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
#>   dem65=~y6  dem65=~y7          c         r1         r2         r3     x1~~x1     x2~~x2     x3~~x3     y2~~y2     y3~~y3
#>  ========== ========== ========== ========== ========== ========== ========== ========== ========== ========== ==========
#>      1.2585     1.2825     1.3098     1.4738     0.4533     0.8644     0.0818     0.1184     0.4673     6.4896     5.3399
#> 
#> 
#>  ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------
#>      y4~~y4     y1~~y1     y5~~y5     y6~~y6     y7~~y7     y8~~y8 ind60~~ind60 dem60~~dem60 dem65~~dem65
#>  ========== ========== ========== ========== ========== ========== ============ ============ ============
#>      2.8871     1.9419     2.3901     4.3428     3.5096     2.9403       0.4482       3.8717       0.1149
```
The `tuningParameterConfiguration` refers to the rows in the lambda, theta, and alpha matrices that resulted in the best fit:
```r
getTuningParameterConfiguration(regularizedSEMMixedPenalty = fitMp,
                                tuningParameterConfiguration = 11)
#>                 parameter penalty lambda alpha
#> ind60=~x2       ind60=~x2    none      0     0
#> ind60=~x3       ind60=~x3    none      0     0
#> c2                     c2   lasso      1     1
#> c3                     c3   lasso      1     1
#> c4                     c4   lasso      1     1
#> dem60=~y2       dem60=~y2    none      0     0
#> dem60=~y3       dem60=~y3    none      0     0
#> dem60=~y4       dem60=~y4    none      0     0
#> dem65=~y6       dem65=~y6    none      0     0
#> dem65=~y7       dem65=~y7    none      0     0
#> c                       c    none      0     0
#> r1                     r1    scad      0     0
#> r2                     r2    scad      0     0
#> r3                     r3    scad      0     0
#> x1~~x1             x1~~x1    none      0     0
#> x2~~x2             x2~~x2    none      0     0
#> x3~~x3             x3~~x3    none      0     0
#> y2~~y2             y2~~y2    none      0     0
#> y3~~y3             y3~~y3    none      0     0
#> y4~~y4             y4~~y4    none      0     0
#> y1~~y1             y1~~y1    none      0     0
#> y5~~y5             y5~~y5    none      0     0
#> y6~~y6             y6~~y6    none      0     0
#> y7~~y7             y7~~y7    none      0     0
#> y8~~y8             y8~~y8    none      0     0
#> ind60~~ind60 ind60~~ind60    none      0     0
#> dem60~~dem60 dem60~~dem60    none      0     0
#> dem65~~dem65 dem65~~dem65    none      0     0
```
In this case, the best model has no cross-loadings, while the regressions remained unregularized: the lambda for the cross-loadings is at its maximum (1), while the lambda for the regressions is 0 (no regularization).
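To check how sensitive this selection is to the information criterion, the AIC may be used analogously (an assumption; check that your lessSEM version supports it for mixed-penalty models):

```r
# Select the best parameters according to the AIC instead of the BIC
# (assumption: the AIC criterion is supported for mixed-penalty models):
coef(fitMp, criterion = "AIC")
```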
The glmnet optimizer is typically considerably faster than ista. However, sometimes glmnet may run into issues. In that case, it can help to switch to ista:
```r
mp <- lavaanModel |>
  # Change the optimizer and the control object:
  mixedPenalty(method = "ista",
               control = controlIsta()) |>
  addLasso(regularized = c("c2", "c3", "c4"),
           lambdas = seq(0, 1, .1)) |>
  addLasso(regularized = c("r1", "r2", "r3"),
           lambdas = seq(0, 1, .2))
```
To fit the model, we again use the `fit` function:
```r
fitMp <- fit(mp)
```
```r
coef(fitMp, criterion = "BIC")
#> 
#>  Tuning                       ||--|| Estimates
#>  ---------------------------- ||--|| ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
#>  tuningParameterConfiguration ||--||  ind60=~x2  ind60=~x3         c2         c3         c4  dem60=~y2  dem60=~y3  dem60=~y4
#>  ============================ ||--|| ========== ========== ========== ========== ========== ========== ========== ==========
#>                       11.0000 ||--||     2.1818     1.8188          .          .          .     1.3541     1.0441     1.2997
#> 
#> 
#>  ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
#>   dem65=~y6  dem65=~y7          c         r1         r2         r3     x1~~x1     x2~~x2     x3~~x3     y2~~y2     y3~~y3
#>  ========== ========== ========== ========== ========== ========== ========== ========== ========== ========== ==========
#>      1.2586     1.2825     1.3099     1.4740     0.4532     0.8643     0.0818     0.1184     0.4672     6.4892     5.3396
#> 
#> 
#>  ---------- ---------- ---------- ---------- ---------- ---------- ------------ ------------ ------------
#>      y4~~y4     y1~~y1     y5~~y5     y6~~y6     y7~~y7     y8~~y8 ind60~~ind60 dem60~~dem60 dem65~~dem65
#>  ========== ========== ========== ========== ========== ========== ============ ============ ============
#>      2.8861     1.9420     2.3904     4.3421     3.5090     2.9392       0.4482       3.8710       0.1159
```
The `tuningParameterConfiguration` refers to the rows in the lambda, theta, and alpha matrices that resulted in the best fit:
```r
getTuningParameterConfiguration(regularizedSEMMixedPenalty = fitMp,
                                tuningParameterConfiguration = 11)
#>                 parameter penalty lambda theta alpha
#> ind60=~x2       ind60=~x2    none      0     0     0
#> ind60=~x3       ind60=~x3    none      0     0     0
#> c2                     c2   lasso      1     0     1
#> c3                     c3   lasso      1     0     1
#> c4                     c4   lasso      1     0     1
#> dem60=~y2       dem60=~y2    none      0     0     0
#> dem60=~y3       dem60=~y3    none      0     0     0
#> dem60=~y4       dem60=~y4    none      0     0     0
#> dem65=~y6       dem65=~y6    none      0     0     0
#> dem65=~y7       dem65=~y7    none      0     0     0
#> c                       c    none      0     0     0
#> r1                     r1   lasso      0     0     1
#> r2                     r2   lasso      0     0     1
#> r3                     r3   lasso      0     0     1
#> x1~~x1             x1~~x1    none      0     0     0
#> x2~~x2             x2~~x2    none      0     0     0
#> x3~~x3             x3~~x3    none      0     0     0
#> y2~~y2             y2~~y2    none      0     0     0
#> y3~~y3             y3~~y3    none      0     0     0
#> y4~~y4             y4~~y4    none      0     0     0
#> y1~~y1             y1~~y1    none      0     0     0
#> y5~~y5             y5~~y5    none      0     0     0
#> y6~~y6             y6~~y6    none      0     0     0
#> y7~~y7             y7~~y7    none      0     0     0
#> y8~~y8             y8~~y8    none      0     0     0
#> ind60~~ind60 ind60~~ind60    none      0     0     0
#> dem60~~dem60 dem60~~dem60    none      0     0     0
#> dem65~~dem65 dem65~~dem65    none      0     0     0
```
Here is a short run-time comparison of ista and glmnet with the lasso-regularized model from above: Five repetitions using ista took 24.509 seconds, while glmnet took 1.131 seconds. That is, if you can use glmnet with your model, we recommend that you do.
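Such a comparison could be run along the following lines. This is a sketch: `mpGlmnet` and `mpIsta` are hypothetical names for the glmnet- and ista-based setups from above, and the `microbenchmark` package is just one of several timing options:

```r
# Sketch of a run-time comparison; mpGlmnet and mpIsta are the
# mixed-penalty setups from above, built with the glmnet and ista
# optimizers, respectively (hypothetical names).
library(microbenchmark)

microbenchmark(
  glmnet = fit(mpGlmnet),
  ista   = fit(mpIsta),
  times  = 5
)
```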