RNN: Recurrent neural network layers

Description

Constructors for recurrent neural network layers: SimpleRNN, GRU, and LSTM.

Usage

SimpleRNN(units, activation = "tanh", use_bias = TRUE,
  kernel_initializer = "glorot_uniform",
  recurrent_initializer = "orthogonal", bias_initializer = "zeros",
  kernel_regularizer = NULL, recurrent_regularizer = NULL,
  bias_regularizer = NULL, activity_regularizer = NULL,
  kernel_constraint = NULL, recurrent_constraint = NULL,
  bias_constraint = NULL, dropout = 0, recurrent_dropout = 0,
  input_shape = NULL)

GRU(units, activation = "tanh", recurrent_activation = "hard_sigmoid",
  use_bias = TRUE, kernel_initializer = "glorot_uniform",
  recurrent_initializer = "orthogonal", bias_initializer = "zeros",
  kernel_regularizer = NULL, recurrent_regularizer = NULL,
  bias_regularizer = NULL, activity_regularizer = NULL,
  kernel_constraint = NULL, recurrent_constraint = NULL,
  bias_constraint = NULL, dropout = 0, recurrent_dropout = 0,
  input_shape = NULL)

LSTM(units, activation = "tanh", recurrent_activation = "hard_sigmoid",
  use_bias = TRUE, kernel_initializer = "glorot_uniform",
  recurrent_initializer = "orthogonal", bias_initializer = "zeros",
  unit_forget_bias = TRUE, kernel_regularizer = NULL,
  recurrent_regularizer = NULL, bias_regularizer = NULL,
  activity_regularizer = NULL, kernel_constraint = NULL,
  recurrent_constraint = NULL, bias_constraint = NULL, dropout = 0,
  recurrent_dropout = 0, return_sequences = FALSE, input_shape = NULL)

Arguments

units

Positive integer, dimensionality of the output space.

activation

Activation function to use.

use_bias

Boolean, whether the layer uses a bias vector.

kernel_initializer

Initializer for the kernel weights matrix, used for the linear transformation of the inputs.

recurrent_initializer

Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.

bias_initializer

Initializer for the bias vector.

kernel_regularizer

Regularizer function applied to the kernel weights matrix.

recurrent_regularizer

Regularizer function applied to the recurrent_kernel weights matrix.

bias_regularizer

Regularizer function applied to the bias vector.

activity_regularizer

Regularizer function applied to the output of the layer (its "activation").

kernel_constraint

Constraint function applied to the kernel weights matrix.

recurrent_constraint

Constraint function applied to the recurrent_kernel weights matrix.

bias_constraint

Constraint function applied to the bias vector.

dropout

Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.

recurrent_dropout

Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.

input_shape

Only needed when this is the first layer of a model; sets the expected shape of the input data.

recurrent_activation

Activation function to use for the recurrent step.

unit_forget_bias

Boolean. If TRUE, add 1 to the bias of the forget gate at initialization.

return_sequences

Boolean. Whether to return the last output in the output sequence, or the full sequence.
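When recurrent layers are stacked, every layer except the last must return the full sequence so that the following layer receives one input per timestep. A minimal sketch of this pattern, reusing the constructors from the Usage section above (the layer sizes are arbitrary; of the three signatures shown, only LSTM documents return_sequences):

mod <- Sequential()
mod$add(Embedding(input_dim = 20, output_dim = 10, input_length = 100))
mod$add(LSTM(32, return_sequences = TRUE))  # passes all 100 timesteps onward
mod$add(LSTM(16))                           # returns only the final output
mod$add(Dense(1))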

Author(s)

Taylor B. Arnold, taylor.arnold@acm.org

References

Chollet, Francois. 2015. Keras: Deep Learning library for Theano and TensorFlow.

See Also

Other layers: Activation, ActivityRegularization, AdvancedActivation, BatchNormalization, Conv, Dense, Dropout, Embedding, Flatten, GaussianNoise, LayerWrapper, LocallyConnected, Masking, MaxPooling, Permute, RepeatVector, Reshape, Sequential

Examples

if (keras_available()) {
  # toy data: 100 sequences of length 100 with integer tokens in 0:19
  X_train <- matrix(sample(0:19, 100 * 100, TRUE), ncol = 100)
  Y_train <- rnorm(100)

  mod <- Sequential()
  # map each of the 20 token ids to a 10-dimensional embedding
  mod$add(Embedding(input_dim = 20, output_dim = 10,
                    input_length = 100))
  mod$add(Dropout(0.5))

  # a single LSTM layer that returns only its final output
  mod$add(LSTM(16))
  mod$add(Dense(1))
  mod$add(Activation("sigmoid"))

  keras_compile(mod, loss = "mse", optimizer = RMSprop())
  keras_fit(mod, X_train, Y_train, epochs = 3, verbose = 0)
}
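A further sketch, not from the original documentation, showing a recurrent layer as the first layer of a model: input_shape gives the per-sample shape as (timesteps, features), and the dropout arguments are set purely for illustration.

if (keras_available()) {
  # toy data: 100 univariate sequences of 50 timesteps each
  X_train <- array(rnorm(100 * 50), dim = c(100, 50, 1))
  Y_train <- rnorm(100)

  mod <- Sequential()
  # first layer of the model, so input_shape is required:
  # 50 timesteps with 1 feature per timestep
  mod$add(GRU(8, input_shape = c(50, 1),
              dropout = 0.2, recurrent_dropout = 0.2))
  mod$add(Dense(1))

  keras_compile(mod, loss = "mse", optimizer = RMSprop())
  keras_fit(mod, X_train, Y_train, epochs = 3, verbose = 0)
}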