activation_elu    R Documentation
The exponential linear unit (ELU) with alpha > 0 is defined as:

  x                      if x > 0
  alpha * (exp(x) - 1)   if x < 0
ELUs have negative values, which pushes the mean of the activations closer to zero.
Mean activations closer to zero enable faster learning, as they bring the gradient closer to the natural gradient. ELUs saturate to a negative value as the input becomes more negative. Saturation means a small derivative, which reduces the variation and the amount of information propagated to the next layer.
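As a quick illustration, the piecewise rule above can be reproduced in a few lines of base R (a minimal reference sketch, not part of the package API); activation_elu() applies the same rule elementwise to a tensor:

elu_ref <- function(x, alpha = 1) {
  # x for positive inputs, alpha * (exp(x) - 1) otherwise
  ifelse(x > 0, x, alpha * (exp(x) - 1))
}
elu_ref(c(-2, -0.5, 0, 0.5, 2))
# [1] -0.8646647 -0.3934693  0.0000000  0.5000000  2.0000000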
activation_elu(x, alpha = 1)
x      Input tensor.

alpha  Numeric. See description for details.
A tensor, the result from applying the activation to the input tensor x.
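A usage sketch (assuming the keras3 package with a working backend; op_convert_to_tensor() and layer_dense() are standard keras3 helpers used here for illustration): the function can be applied directly to a tensor, or passed as a layer's activation.

library(keras3)

x <- op_convert_to_tensor(c(-3, -1, 0, 2))
activation_elu(x)               # default alpha = 1
activation_elu(x, alpha = 0.5)  # saturates at -0.5 for large negative inputs

# or as the activation of a layer
layer <- layer_dense(units = 16, activation = activation_elu)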
Other activations:
activation_celu()
activation_exponential()
activation_gelu()
activation_glu()
activation_hard_shrink()
activation_hard_sigmoid()
activation_hard_tanh()
activation_leaky_relu()
activation_linear()
activation_log_sigmoid()
activation_log_softmax()
activation_mish()
activation_relu()
activation_relu6()
activation_selu()
activation_sigmoid()
activation_silu()
activation_soft_shrink()
activation_softmax()
activation_softplus()
activation_softsign()
activation_sparse_plus()
activation_sparsemax()
activation_squareplus()
activation_tanh()
activation_tanh_shrink()
activation_threshold()