layer_activation_relu: Rectified Linear Unit activation function

View source: R/layers-activations.R

Description

Rectified Linear Unit activation function
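
With the default arguments the layer applies element-wise max(x, 0). The max_value, negative_slope and threshold arguments generalise this: inputs at or above max_value are capped at max_value, inputs between threshold and max_value pass through unchanged, and inputs below threshold are scaled by negative_slope. The sketch below is plain R written only to spell out that rule (the actual computation happens in the backend, not in R):

relu_rule <- function(x, max_value = Inf, negative_slope = 0, threshold = 0) {
  ifelse(
    x >= max_value, max_value,
    ifelse(x >= threshold, x, negative_slope * (x - threshold))
  )
}
relu_rule(c(-2, 0.5, 3), max_value = 1)   # 0.0 0.5 1.0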

Usage

layer_activation_relu(
  object,
  max_value = NULL,
  negative_slope = 0,
  threshold = 0,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)

Arguments

object

Model or layer object

max_value

float, the maximum output value.

negative_slope

float >= 0. Negative slope coefficient.

threshold

float. Threshold value for thresholded activation.

input_shape

Input shape (list of integers, does not include the samples axis) which is required when using this layer as the first layer in a model.

batch_input_shape

Shape of the input, including the batch size. For instance, batch_input_shape=c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors. batch_input_shape=list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors.

batch_size

Fixed batch size for the layer.

dtype

The data type expected by the input, as a string (float32, float64, int32...)

name

An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.

trainable

Whether the layer weights will be updated during training.

weights

Initial weights for layer.
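
A minimal usage sketch (assuming the keras package is installed with a working TensorFlow backend; the layer sizes and argument values are illustrative only):

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 16, input_shape = c(32)) %>%
  layer_activation_relu(max_value = 6, negative_slope = 0.1)

Setting max_value = 6 gives the common "ReLU6" variant; omitting all optional arguments yields the standard max(x, 0) activation.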

See Also

Other activation layers: layer_activation_elu(), layer_activation_leaky_relu(), layer_activation_parametric_relu(), layer_activation_selu(), layer_activation_softmax(), layer_activation_thresholded_relu(), layer_activation()

