View source: R/layers-activations.R
Description

It follows: f(x) = x for x > theta, f(x) = 0 otherwise.
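As a purely illustrative sketch (plain R, not part of the package API), the rule above can be written directly; thresholded_relu() here is a hypothetical helper used only to show the behaviour:

  # Hypothetical helper: element-wise thresholded ReLU as described above.
  thresholded_relu <- function(x, theta = 1) {
    ifelse(x > theta, x, 0)
  }

  thresholded_relu(c(-2, 0.5, 1, 1.5, 3), theta = 1)
  #> [1] 0.0 0.0 0.0 1.5 3.0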
Usage

layer_activation_thresholded_relu(
  object,
  theta = 1,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
Arguments

object
  Model or layer object.

theta
  float >= 0. Threshold location of activation.

input_shape
  Input shape (list of integers, does not include the samples axis) which is required when using this layer as the first layer in a model.

batch_input_shape
  Shapes, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors.

batch_size
  Fixed batch size for layer.

dtype
  The data type expected by the input, as a string ("float32", "float64", "int32", ...).

name
  An optional name string for the layer. It should be unique within a model (do not reuse the same name twice) and will be autogenerated if not provided.

trainable
  Whether the layer weights will be updated during training.

weights
  Initial weights for layer.
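A minimal usage sketch, assuming the keras R package is installed and that the layer documented here is layer_activation_thresholded_relu() (inferred from the signature and arguments above); the layer sizes are arbitrary:

  library(keras)

  # Dense layer followed by a thresholded ReLU: units are passed through
  # unchanged when they exceed theta, and set to zero otherwise.
  model <- keras_model_sequential() %>%
    layer_dense(units = 32, input_shape = c(64)) %>%
    layer_activation_thresholded_relu(theta = 1.0)

  summary(model)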
References

Zero-bias autoencoders and the benefits of co-adapting features.
See Also

Other activation layers: layer_activation_elu(), layer_activation_leaky_relu(), layer_activation_parametric_relu(), layer_activation_relu(), layer_activation_selu(), layer_activation_softmax(), layer_activation()