build_LSTM: Build LSTM architecture

View source: R/deepRNN.r

Description

build_LSTM creates a sequential ANN model with stacked LSTM layers, an output dense layer, and optional dropout layers. For a univariate time series, typically stateful = TRUE and batch_size = 1 with return_sequences = FALSE; for a multivariate time series, typically stateful = FALSE and batch_size = NULL with return_sequences = TRUE.

Usage

build_LSTM(
  features,
  timesteps = 1L,
  batch_size = NULL,
  hidden = NULL,
  dropout = NULL,
  output = list(1, "linear"),
  stateful = FALSE,
  return_sequences = FALSE,
  loss = "mean_squared_error",
  optimizer = "adam",
  metrics = c("mean_absolute_error")
)
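
The two conventions from the Description can be sketched as follows. This is a hedged example assuming the deepANN and keras packages are installed and configured; the hidden-layer sizes and timestep counts are illustrative choices, not defaults:

```r
library(deepANN)

# Univariate series: stateful LSTM, batch size 1, one output value per sample
model_uni <- build_LSTM(
  features = 1L,
  timesteps = 4L,
  batch_size = 1L,
  hidden = data.frame(units = 16L, activation = "tanh"),
  stateful = TRUE,
  return_sequences = FALSE
)

# Multivariate series: stateless, batch size left NULL, one output per timestep
model_multi <- build_LSTM(
  features = 3L,
  timesteps = 4L,
  hidden = data.frame(units = c(32L, 16L), activation = c("tanh", "tanh")),
  stateful = FALSE,
  return_sequences = TRUE
)
```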

Arguments

features

Number of features, e.g. returned by nunits.

timesteps

The number of timesteps per sample; a timestep denotes one of the distinct periods of the values covered by a single sample.

batch_size

Batch size, the number of samples per gradient update, used as part of the input shape. The batch size should reflect the periodicity of the data; see Gulli/Pal (2017: 211) and Gulli/Kapoor/Pal (2019: 290).

hidden

A data frame with two columns: the first contains the number of hidden units, the second the activation function. The number of rows determines the number of hidden layers.
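
For instance, a two-row data frame yields two stacked LSTM layers (the column names below are illustrative; only the column order matters per the description above):

```r
# First column: hidden units; second column: activation function.
# Two rows -> two stacked LSTM layers (32 units, then 16 units).
hidden <- data.frame(units = c(32L, 16L),
                     activation = c("tanh", "tanh"))
```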

dropout

A numeric vector of dropout rates (the fraction of input units to drop), or NULL if no dropout is desired.

output

A list with two elements: the first determines the number of output units, e.g. returned by nunits, and the second the output activation function.

stateful

A logical value indicating whether the last cell state of an LSTM unit at period t-1 is used as the initial cell state of the unit at period t (TRUE).

return_sequences

A logical value indicating whether an output unit produces a single value (FALSE) or one value per timestep (TRUE).

loss

Name of an objective function or an objective function itself. If the model has multiple outputs, a different loss can be used for each output by passing a dictionary or a list of objectives; the loss value minimized by the model is then the sum of all individual losses.

optimizer

Name of optimizer or optimizer instance.

metrics

Vector or list of metrics to be evaluated by the model during training and testing.

Value

A model object with stacked LSTM layers, an output dense layer, and optional dropout layers.
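
A fuller call combining the arguments above might look like the following sketch (assuming deepANN and keras are installed; all numeric choices are illustrative):

```r
library(deepANN)

model <- build_LSTM(
  features = 5L,
  timesteps = 10L,
  hidden = data.frame(units = c(64L, 32L), activation = c("tanh", "tanh")),
  dropout = c(0.2, 0.2),          # one dropout rate per LSTM layer
  output = list(1L, "linear"),    # one linear output unit
  loss = "mean_squared_error",
  optimizer = "adam",
  metrics = c("mean_absolute_error")
)
```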

References

Gulli, A., Pal, S. (2017): Deep Learning with Keras: Implement neural networks with Keras on Theano and TensorFlow. Birmingham: Packt Publishing.

Gulli, A., Kapoor, A., Pal, S. (2019): Deep Learning with TensorFlow 2 and Keras: Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API. 2nd ed. Birmingham: Packt Publishing.

See Also

as_LSTM_X, nunits, keras_model_sequential, layer_dense, layer_dropout, layer_lstm, compile.keras.engine.training.Model.

Other Recurrent Neural Network (RNN): as_LSTM_X(), as_LSTM_Y(), as_LSTM_data_frame(), as_LSTM_period_outcome(), as_lag(), as_timesteps(), fit_LSTM(), get_LSTM_XY(), get_period_shift(), load_weights_ANN(), predict_ANN(), save_weights_ANN(), start_invert_differencing()


stschn/deepANN documentation built on June 25, 2024, 7:27 a.m.