adaptiveLasso (R Documentation)
View source: R/regularizeSEMInterface.R
Implements adaptive lasso regularization for structural equation models. The penalty function is given by:
p(x_j) = \frac{1}{w_j}\lambda |x_j|
Adaptive lasso regularization will set parameters to zero if \lambda
is large enough.
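As a minimal numerical illustration of the formula (all values below are hypothetical), the penalty contribution of a single parameter can be evaluated directly in R:
lambda <- 0.3  # tuning parameter
w_j    <- 2    # weight of parameter x_j (hypothetical)
x_j    <- 0.8  # current value of parameter x_j (hypothetical)
(1 / w_j) * lambda * abs(x_j)  # penalty contribution p(x_j) = 0.12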
adaptiveLasso(
lavaanModel,
regularized,
weights = NULL,
lambdas = NULL,
nLambdas = NULL,
reverse = TRUE,
curve = 1,
method = "glmnet",
modifyModel = lessSEM::modifyModel(),
control = lessSEM::controlGlmnet()
)
lavaanModel: model of class lavaan

regularized: vector with the names of the parameters which are to be regularized. If you are unsure what these parameters are called, use getLavaanParameters(model) with your lavaan model object.

weights: labeled vector with weights for each of the parameters in the model. If you are unsure what these parameters are called, use getLavaanParameters(model) with your lavaan model object. If set to NULL, the default weights will be used: the inverse of the absolute values of the unregularized parameter estimates (see the sketch following this argument list).

lambdas: numeric vector with values for the tuning parameter lambda.

nLambdas: alternative to lambdas: for lasso and adaptive lasso penalties, lessSEM can automatically compute the first lambda value which sets all regularized parameters to zero. It will then generate nLambdas values between 0 and that maximal lambda.

reverse: if set to TRUE and nLambdas is used, lessSEM will start with the largest lambda and gradually decrease it. Otherwise, lessSEM will start with the smallest lambda and gradually increase it.

curve: allows for unequally spaced lambda steps (e.g., .01, .02, .05, 1, 5, 20). If curve is close to 1, all lambda values will be equally spaced; if curve is large, lambda values will be concentrated close to 0. See ?lessSEM::curveLambda for more information.

method: which optimizer should be used? Currently implemented are ista and glmnet. With ista, the control argument can be used to switch to related procedures (currently gist).

modifyModel: used to modify the lavaanModel. See ?modifyModel.

control: used to control the optimizer. This element is generated with the controlIsta and controlGlmnet functions. See ?controlIsta and ?controlGlmnet for more details.
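The following is a rough sketch of how custom weights and a custom lambda grid could be supplied, given a fitted lavaan model (here called lavaanModel, as in the example below); the parameter labels and the lambda grid are illustrative assumptions, not prescribed values:
library(lessSEM)
# inspect the labeled parameter vector of the fitted lavaan model:
pars <- getLavaanParameters(lavaanModel)
# mirror the documented default: inverse absolute unregularized estimates
wts  <- 1 / abs(pars)
fitCustom <- adaptiveLasso(lavaanModel = lavaanModel,
                           regularized = paste0("l", 6:15),
                           weights     = wts,
                           lambdas     = seq(0, 1, length.out = 50))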
Identical to regsem, models are specified using lavaan. Currently, most standard SEMs are supported. lessSEM also provides full information maximum likelihood for missing data. To use this functionality, fit your lavaan model with the argument sem(..., missing = 'ml'). lessSEM will then automatically switch to full information maximum likelihood as well.
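A minimal sketch of this workflow, assuming missing values are introduced into the simulated example data used below (the missing-data pattern here is purely illustrative):
library(lessSEM)
dataMissing <- simulateExampleData()
dataMissing$y1[sample(nrow(dataMissing), 20)] <- NA  # artificial missingness
syntaxMissing <- "
f =~ l1*y1 + l2*y2 + l3*y3 + l4*y4 + l5*y5 +
     l6*y6 + l7*y7 + l8*y8 + l9*y9 + l10*y10 +
     l11*y11 + l12*y12 + l13*y13 + l14*y14 + l15*y15
f ~~ 1*f
"
modelMissing <- lavaan::sem(syntaxMissing,
                            data = dataMissing,
                            meanstructure = TRUE,
                            std.lv = TRUE,
                            missing = "ml")
# lessSEM detects missing = 'ml' and switches to full information ML:
lsemMissing <- adaptiveLasso(lavaanModel = modelMissing,
                             regularized = paste0("l", 6:15),
                             nLambdas = 50)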
Adaptive lasso regularization:
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429. https://doi.org/10.1198/016214506000000735
Regularized SEM:
Huang, P.-H., Chen, H., & Weng, L.-J. (2017). A Penalized Likelihood Method for Structural Equation Modeling. Psychometrika, 82(2), 329–354. https://doi.org/10.1007/s11336-017-9566-9
Jacobucci, R., Grimm, K. J., & McArdle, J. J. (2016). Regularized Structural Equation Modeling. Structural Equation Modeling: A Multidisciplinary Journal, 23(4), 555–566. https://doi.org/10.1080/10705511.2016.1154793
For more details on GLMNET, see:
Friedman, J., Hastie, T., & Tibshirani, R. (2010). Regularization Paths for Generalized Linear Models via Coordinate Descent. Journal of Statistical Software, 33(1), 1–20. https://doi.org/10.18637/jss.v033.i01
Yuan, G.-X., Chang, K.-W., Hsieh, C.-J., & Lin, C.-J. (2010). A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification. Journal of Machine Learning Research, 11, 3183–3234.
Yuan, G.-X., Ho, C.-H., & Lin, C.-J. (2012). An improved GLMNET for l1-regularized logistic regression. The Journal of Machine Learning Research, 13, 1999–2030. https://doi.org/10.1145/2020408.2020421
For more details on ISTA, see:
Beck, A., & Teboulle, M. (2009). A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1), 183–202. https://doi.org/10.1137/080716542
Gong, P., Zhang, C., Lu, Z., Huang, J., & Ye, J. (2013). A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems. Proceedings of the 30th International Conference on Machine Learning, 28(2), 37–45.
Parikh, N., & Boyd, S. (2013). Proximal Algorithms. Foundations and Trends in Optimization, 1(3), 123–231.
Returns a model of class regularizedSEM.
library(lessSEM)
# Identical to regsem, lessSEM builds on the lavaan
# package for model specification. The first step
# therefore is to implement the model in lavaan.
dataset <- simulateExampleData()
lavaanSyntax <- "
f =~ l1*y1 + l2*y2 + l3*y3 + l4*y4 + l5*y5 +
l6*y6 + l7*y7 + l8*y8 + l9*y9 + l10*y10 +
l11*y11 + l12*y12 + l13*y13 + l14*y14 + l15*y15
f ~~ 1*f
"
lavaanModel <- lavaan::sem(lavaanSyntax,
data = dataset,
meanstructure = TRUE,
std.lv = TRUE)
# Regularization:
lsem <- adaptiveLasso(
# pass the fitted lavaan model
lavaanModel = lavaanModel,
# names of the regularized parameters:
regularized = paste0("l", 6:15),
# in case of lasso and adaptive lasso, we can specify the number of lambda
# values to use. lessSEM will automatically find lambda_max and fit
# models for nLambda values between 0 and lambda_max. For the other
# penalty functions, lambdas must be specified explicitly
nLambdas = 50)
# use the plot-function to plot the regularized parameters:
plot(lsem)
# the coefficients can be accessed with:
coef(lsem)
# if you are only interested in the estimates and not the tuning parameters, use
coef(lsem)@estimates
# or
estimates(lsem)
# elements of lsem can be accessed with the @ operator:
lsem@parameters[1,]
# fit Measures:
fitIndices(lsem)
# The best parameters can also be extracted with:
coef(lsem, criterion = "AIC")
# or
estimates(lsem, criterion = "AIC")
#### Advanced ####
# Switching the optimizer #
# Use the "method" argument to switch the optimizer. The control argument
# must also be changed to the corresponding function:
lsemIsta <- adaptiveLasso(
lavaanModel = lavaanModel,
regularized = paste0("l", 6:15),
nLambdas = 50,
method = "ista",
control = controlIsta())
# Note: The results are basically identical:
lsemIsta@parameters - lsem@parameters