This learner provides fitting procedures for elastic net models, including both lasso (L1) and ridge (L2) penalized regression, using the glmnet package. The function cv.glmnet is used to select an appropriate value of the regularization parameter lambda. For details on these regularized regression models and glmnet, consider consulting Friedman, Hastie, and Tibshirani (2010).
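To make the description concrete, here is a minimal sketch of the kind of cross-validated fit that happens under the hood, calling cv.glmnet directly rather than through the learner. The covariate subset and variable names are illustrative assumptions, not part of the sl3 interface.

```r
library(glmnet)

# illustrative design matrix and outcome from mtcars
x <- as.matrix(mtcars[, c("cyl", "disp", "hp", "wt")])
y <- mtcars$mpg

# cv.glmnet fits a lasso path (alpha = 1 by default) and uses internal
# cross-validation to select the regularization parameter lambda
set.seed(1)
cv_fit <- cv.glmnet(x, y, nfolds = 5)

# two candidate values of lambda: the CV-loss minimizer and the 1-SE rule
cv_fit$lambda.min
cv_fit$lambda.1se
```

The use_min parameter described below controls which of these two lambda values the learner uses at prediction time.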
An R6Class object inheriting from Lrnr_base.
A learner object inheriting from Lrnr_base with methods for training and prediction. For a full list of learner functionality, see the complete documentation of Lrnr_base.
lambda = NULL: An optional vector of lambda values to compare.
type.measure = "deviance": The loss function to use when selecting lambda. Options are documented in cv.glmnet.
nfolds = 10: Number of folds to use for internal cross-validation.
alpha = 1: The elastic net mixing parameter: alpha = 0 specifies ridge (L2-penalized) regression, while alpha = 1 specifies lasso (L1-penalized) regression. Values in the closed unit interval specify a weighted combination of the two penalties. For further details, consult the documentation of glmnet.
nlambda = 100: The number of lambda values to fit. Comparing fewer values will speed up computation, but may hurt statistical performance. For further details, consult the documentation of cv.glmnet.
use_min = TRUE: If TRUE, the smallest value of the lambda regularization parameter is used for prediction (i.e., lambda = cv_fit$lambda.min); otherwise, a larger value is used (i.e., lambda = cv_fit$lambda.1se). The distinction between the two variants is clarified in the documentation of cv.glmnet.
stratify_cv = FALSE: Whether to stratify the internal cross-validation folds so that a binary outcome's prevalence is roughly the same in the training and validation sets of each fold. This argument can only be used when the outcome type for training is binomial, and when either the id node in the task or cv.glmnet's foldid argument is left unspecified upon initializing the learner.
...: Other parameters passed to cv.glmnet and glmnet.
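The parameters above can be combined when initializing the learner. The following is an illustrative sketch, not a recommended default: the specific values (an equal-weight elastic net penalty, a shortened lambda path, and the more conservative 1-SE rule for prediction) are assumptions chosen only to show the interface.

```r
library(sl3)

# configure the learner with non-default regularization settings
enet_lrnr <- Lrnr_glmnet$new(
  alpha = 0.5,    # equal mix of L1 and L2 penalties
  nlambda = 50,   # compare fewer lambda values to speed up fitting
  nfolds = 5,     # folds for cv.glmnet's internal cross-validation
  use_min = FALSE # predict with lambda.1se rather than lambda.min
)
```

As with any sl3 learner, the configured object is then trained on a task via enet_lrnr$train().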
Other Learners: Custom_chain, Lrnr_HarmonicReg, Lrnr_arima, Lrnr_bartMachine, Lrnr_base, Lrnr_bayesglm, Lrnr_bilstm, Lrnr_caret, Lrnr_cv_selector, Lrnr_cv, Lrnr_dbarts, Lrnr_define_interactions, Lrnr_density_discretize, Lrnr_density_hse, Lrnr_density_semiparametric, Lrnr_earth, Lrnr_expSmooth, Lrnr_gam, Lrnr_ga, Lrnr_gbm, Lrnr_glm_fast, Lrnr_glm, Lrnr_grf, Lrnr_gru_keras, Lrnr_gts, Lrnr_h2o_grid, Lrnr_hal9001, Lrnr_haldensify, Lrnr_hts, Lrnr_independent_binomial, Lrnr_lightgbm, Lrnr_lstm_keras, Lrnr_mean, Lrnr_multiple_ts, Lrnr_multivariate, Lrnr_nnet, Lrnr_nnls, Lrnr_optim, Lrnr_pca, Lrnr_pkg_SuperLearner, Lrnr_polspline, Lrnr_pooled_hazards, Lrnr_randomForest, Lrnr_ranger, Lrnr_revere_task, Lrnr_rpart, Lrnr_rugarch, Lrnr_screener_augment, Lrnr_screener_coefs, Lrnr_screener_correlation, Lrnr_screener_importance, Lrnr_sl, Lrnr_solnp_density, Lrnr_solnp, Lrnr_stratified, Lrnr_subset_covariates, Lrnr_svm, Lrnr_tsDyn, Lrnr_ts_weights, Lrnr_xgboost, Pipeline, Stack, define_h2o_X(), undocumented_learner
data(mtcars)
mtcars_task <- sl3_Task$new(
  data = mtcars,
  covariates = c(
    "cyl", "disp", "hp", "drat", "wt", "qsec", "vs", "am",
    "gear", "carb"
  ),
  outcome = "mpg"
)

# simple prediction with lasso penalty
lasso_lrnr <- Lrnr_glmnet$new()
lasso_fit <- lasso_lrnr$train(mtcars_task)
lasso_preds <- lasso_fit$predict()

# simple prediction with ridge penalty
ridge_lrnr <- Lrnr_glmnet$new(alpha = 0)
ridge_fit <- ridge_lrnr$train(mtcars_task)
ridge_preds <- ridge_fit$predict()