logistic_reg() fits a generalized linear model for binary outcomes. A
linear combination of the predictors is used to model the log odds of an event.
For this engine, there is a single mode: classification
This model has one tuning parameter:

penalty: Amount of Regularization (type: double, default: 0.0)

For penalty, the amount of regularization is restricted to the L2 penalty
(i.e., ridge or weight decay).
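As a minimal sketch (assuming parsnip is installed, along with a working keras backend), the penalty can be set to a fixed value or marked for tuning:

```r
library(parsnip)

# Fix the L2 penalty at a specific value
spec <- logistic_reg(penalty = 0.01) %>%
  set_engine("keras") %>%
  set_mode("classification")

# Or flag it for tuning (tune() is re-exported by parsnip in recent versions)
spec_tune <- logistic_reg(penalty = tune()) %>%
  set_engine("keras")
```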
logistic_reg(penalty = double(1)) %>%
  set_engine("keras") %>%
  translate()

## Logistic Regression Model Specification (classification)
##
## Main Arguments:
##   penalty = double(1)
##
## Computational engine: keras
##
## Model fit template:
## parsnip::keras_mlp(x = missing_arg(), y = missing_arg(), penalty = double(1),
##     hidden_units = 1, act = "linear")
keras_mlp() is a parsnip wrapper around keras code for
neural networks. This model fits a logistic regression as a network with a
single hidden unit.
Factor/categorical predictors need to be converted to numeric values
(e.g., dummy or indicator variables) for this engine. When using the
formula method via
fit.model_spec(), parsnip will
convert factor columns to indicators.
Predictors should be on the same scale. One way to achieve this is to center and scale each predictor so that it has mean zero and a variance of one.
The “Fitting and Predicting with parsnip” article contains
examples of logistic_reg() with the "keras" engine.
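A fit-and-predict sketch, assuming a working keras installation and again borrowing `two_class_dat` from modeldata for illustration (the `epochs` engine argument is passed through to keras_mlp()):

```r
library(parsnip)
library(modeldata)
data(two_class_dat)

fit_obj <- logistic_reg(penalty = 0.1) %>%
  set_engine("keras", epochs = 20) %>%
  fit(Class ~ A + B, data = two_class_dat)

# Class probabilities for the first few rows
predict(fit_obj, two_class_dat[1:3, ], type = "prob")
```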
Hoerl, A., & Kennard, R. (2000). Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, 42(1), 80-86.