details_linear_reg_glmnet (R Documentation)
glmnet::glmnet() uses regularized least squares to fit models with numeric outcomes.

For this engine, there is a single mode: regression.
This model has 2 tuning parameters:

penalty: Amount of Regularization (type: double, default: see below)

mixture: Proportion of Lasso Penalty (type: double, default: 1.0)

A value of mixture = 1 corresponds to a pure lasso model, while mixture = 0 indicates ridge regression. The penalty parameter has no default and requires a single numeric value. For more details about this, and the glmnet model in general, see glmnet-details.
linear_reg(penalty = double(1), mixture = double(1)) %>%
  set_engine("glmnet") %>%
  translate()
## Linear Regression Model Specification (regression)
##
## Main Arguments:
##   penalty = 0
##   mixture = double(1)
##
## Computational engine: glmnet
##
## Model fit template:
## glmnet::glmnet(x = missing_arg(), y = missing_arg(), weights = missing_arg(),
##     alpha = double(1), family = "gaussian")
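A fixed amount of regularization and a lasso/ridge mix can also be given directly. This is a minimal sketch; the values 0.01 and 0.5 are only illustrative and would normally be chosen by tuning:

library(parsnip)

# Illustrative values; penalty and mixture are usually tuned rather than fixed.
spec <- linear_reg(penalty = 0.01, mixture = 0.5) %>%
  set_engine("glmnet")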
Factor/categorical predictors need to be converted to numeric values (e.g., dummy or indicator variables) for this engine. When using the formula method via fit(), parsnip will convert factor columns to indicators.
Predictors should have the same scale. One way to achieve this is to center and scale each so that each predictor has mean zero and a variance of one. By default, glmnet::glmnet() uses the argument standardize = TRUE to center and scale the data.
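Both preprocessing steps can also be expressed with the recipes package. A minimal sketch, assuming a hypothetical data frame dat with a numeric outcome y plus factor and numeric predictors:

library(recipes)

# Hypothetical data: `dat` has a numeric outcome `y` and mixed predictor types.
rec <- recipe(y ~ ., data = dat) %>%
  step_dummy(all_nominal_predictors()) %>%     # factors -> indicator columns
  step_normalize(all_numeric_predictors())     # center and scale the predictors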
This model can utilize case weights during model fitting. To use them, see the documentation in case_weights and the examples on tidymodels.org. The fit() and fit_xy() functions have arguments called case_weights that expect vectors of case weights.
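A minimal sketch, assuming a hypothetical data frame dat with an outcome y and a column wts of non-negative weights:

library(parsnip)
library(hardhat)

# importance_weights() creates a case-weight vector that fit() accepts.
wts <- importance_weights(dat$wts)

linear_reg(penalty = 0.01) %>%
  set_engine("glmnet") %>%
  fit(y ~ ., data = dat, case_weights = wts)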
This model object contains data that are not required to make predictions. When saving the model for the purpose of prediction, the size of the saved object might be substantially reduced by using functions from the butcher package.
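For example (a minimal sketch, assuming fitted_model is a fitted parsnip model object):

library(butcher)

# Drop components that are not needed for prediction before saving to disk.
smaller_model <- butcher(fitted_model)
saveRDS(smaller_model, "linear_reg_glmnet.rds")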
The “Fitting and Predicting with parsnip” article contains examples for linear_reg() with the "glmnet" engine.