For this engine, there are multiple modes: classification and regression.

Tuning Parameters

This model has 6 tuning parameters:

- hidden_units (type: integer, default: 200L)
- penalty (type: double, default: 0.0)
- dropout (type: double, default: 0.5)
- epochs (type: integer, default: 10)
- learn_rate (type: double, default: 0.005)
- activation (type: character, default: see below)
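
A minimal sketch of how these arguments might be flagged for tuning; the tune() placeholder comes from the tune package (not from this engine) and the choice of which arguments to tune is illustrative:

library(parsnip)
library(tune)  # provides the tune() placeholder

mlp(
  hidden_units = tune(),
  penalty = tune(),
  epochs = tune()
) %>%
  set_engine("h2o") %>%
  set_mode("classification")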

The naming of activation functions in [h2o::h2o.deeplearning()] differs from parsnip's conventions. Currently, only "relu" and "tanh" are supported; they are converted internally to h2o's "Rectifier" and "Tanh" before being passed to the fitting function.

The penalty argument corresponds to the l2 penalty (ridge regularization). [h2o::h2o.deeplearning()] also supports an l1 penalty, which can be set directly with the engine argument l1.
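
A minimal sketch of a specification that uses both kinds of regularization; the numeric values are arbitrary and the agua package is assumed to be loaded so that the "h2o" engine is registered:

library(parsnip)
library(agua)  # registers the "h2o" engine

mlp(hidden_units = 64, penalty = 0.001, activation = "relu") %>%  # penalty supplies the l2 penalty
  set_engine("h2o", l1 = 0.0005) %>%                              # l1 is passed on to h2o::h2o.deeplearning()
  set_mode("regression")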

Other engine arguments of interest can be passed to [h2o::h2o.deeplearning()] through set_engine(); see that function's documentation for the full list.

Translation from parsnip to the original package (regression)

[agua::h2o_train_mlp] is a wrapper around [h2o::h2o.deeplearning()].

mlp(
  hidden_units = integer(1),
  penalty = double(1),
  dropout = double(1),
  epochs = integer(1),
  learn_rate = double(1),
  activation = character(1)
) %>%  
  set_engine("h2o") %>% 
  set_mode("regression") %>% 
  translate()

Translation from parsnip to the original package (classification)

mlp(
  hidden_units = integer(1),
  penalty = double(1),
  dropout = double(1),
  epochs = integer(1),
  learn_rate = double(1),
  activation = character(1)
) %>% 
  set_engine("h2o") %>% 
  set_mode("classification") %>% 
  translate()

Preprocessing requirements



By default, [h2o::h2o.deeplearning()] uses the argument standardize = TRUE to center and scale all numeric columns.
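
Since engine arguments are passed through to [h2o::h2o.deeplearning()], this default could be turned off via set_engine() when the predictors are already centered and scaled; a hedged sketch (with parsnip and agua loaded as above):

mlp(hidden_units = 64) %>%
  set_engine("h2o", standardize = FALSE) %>%  # rely on preprocessing done outside of h2o
  set_mode("regression")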

Initializing h2o
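
An h2o server must be running before a model that uses this engine can be fit. A minimal sketch, assuming a local cluster is acceptable:

# Start, or connect to, a local h2o cluster before calling fit().
h2o::h2o.init()

# When finished, the local cluster can be stopped with:
# h2o::h2o.shutdown(prompt = FALSE)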


Saving fitted model objects
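
The fitted model lives inside the h2o cluster and the R object only holds a reference to it, so serializing the parsnip fit with saveRDS() does not, by itself, preserve the network. One hedged option is to save the underlying h2o model directly; the object name fitted_mlp and the path below are illustrative:

# `fitted_mlp` stands in for a fitted parsnip model object.
h2o_model <- parsnip::extract_fit_engine(fitted_mlp)  # the underlying h2o model
h2o::h2o.saveModel(h2o_model, path = "h2o_models")    # write the model to disk
# Reload later with h2o::h2o.loadModel("h2o_models/<model id>").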



