h2o_mlp_train: Wrapper for training a h2o.deeplearning model as part of a parsnip 'mlp' h2o engine

View source: R/mlp.R

h2o_mlp_train R Documentation

Wrapper for training a h2o.deeplearning model as part of a parsnip 'mlp' h2o engine

Description

Wrapper for training a h2o.deeplearning model as part of a parsnip 'mlp' h2o engine

Usage

h2o_mlp_train(
  formula,
  data,
  l2 = 0,
  hidden_dropout_ratios = 0,
  hidden = 100,
  epochs = 10,
  activation = "Rectifier",
  stopping_rounds = 0,
  validation = 0,
  ...
)
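
For illustration, a direct call could look like the following. This is a minimal sketch (not run): it assumes a running h2o cluster started with 'h2o.init()' and uses the built-in 'mtcars' data purely as an example.

library(h2o)
h2o.init()

# fit a small regression network; 10% of rows held out for early stopping
h2o_mlp_train(
  mpg ~ .,
  data = mtcars,
  hidden = 50,
  epochs = 20,
  activation = "Rectifier",
  validation = 0.1,
  stopping_rounds = 3
)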

Arguments

formula

formula

data

data.frame of training data

l2

numeric, L2 regularization parameter (default = 0)

hidden_dropout_ratios

dropout ratio for a single hidden layer (default = 0)

hidden

integer, number of neurons in the hidden layer (default = 100)

epochs

integer, number of epochs (default = 10)

activation

character, activation function. Must be one of: "Tanh", "TanhWithDropout", "Rectifier", "RectifierWithDropout", "Maxout", "MaxoutWithDropout". Defaults to "Rectifier". If 'hidden_dropout_ratios' > 0, then the equivalent activation function with dropout is used.

stopping_rounds

An integer specifying the number of training iterations without improvement before stopping. If 'stopping_rounds = 0' (the default), early stopping is disabled. If 'validation' is used, performance is based on the validation set; otherwise the training set is used.

validation

A positive number. If on '[0, 1)', the value of 'validation' is the random proportion of 'data' that is used for performance assessment and potential early stopping. If 1 or greater, it is the _number_ of training set samples used for these purposes.

...

other arguments not currently used

Value

evaluated h2o model call
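
Examples

In typical use 'h2o_mlp_train' is not called directly; parsnip invokes it when the "h2o" engine is selected for 'mlp()'. The following is a minimal sketch (not run) of that workflow, assuming 'parsnip', 'h2oparsnip', and 'h2o' are installed and a local h2o cluster can be started:

library(parsnip)
library(h2oparsnip)
library(h2o)
h2o.init()

# specify a single-hidden-layer network and route it to the h2o engine
spec <- mlp(hidden_units = 100, epochs = 10) %>%
  set_engine("h2o") %>%
  set_mode("regression")

fitted <- fit(spec, mpg ~ ., data = mtcars)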

