View source: R/layers-activations.R
layer_activation_parametric_relu: Parametric Rectified Linear Unit
It follows: f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha is a learned array with the same shape as x.
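For intuition, the rule above can be written directly in plain R. This is a sketch only: alpha is fixed at 0.25 here for illustration, whereas the layer learns it during training.

  # element-wise PReLU rule with a fixed (not learned) alpha
  prelu <- function(x, alpha = 0.25) ifelse(x < 0, alpha * x, x)
  prelu(c(-2, 0, 2))
  #> [1] -0.5  0.0  2.0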
layer_activation_parametric_relu(
  object,
  alpha_initializer = "zeros",
  alpha_regularizer = NULL,
  alpha_constraint = NULL,
  shared_axes = NULL,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
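A minimal sketch of composing the layer into a model (not taken from this page; the dense layer and its sizes are illustrative assumptions):

  library(keras)

  # PReLU applied to the output of a dense layer
  model <- keras_model_sequential() %>%
    layer_dense(units = 32, input_shape = c(10)) %>%
    layer_activation_parametric_relu()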
object: What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()).

alpha_initializer: Initializer function for the weights.

alpha_regularizer: Regularizer for the weights.

alpha_constraint: Constraint for the weights.

shared_axes: The axes along which to share learnable parameters for the activation function. For example, if the incoming feature maps are from a 2D convolution with output shape (batch, height, width, channels), and you wish to share parameters across space so that each filter has only one set of parameters, set shared_axes = c(1, 2) (see the sketch after this list).

input_shape: Input shape (list of integers, does not include the samples axis) which is required when using this layer as the first layer in a model.

batch_input_shape: Shapes, including the batch size. For instance, batch_input_shape = list(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors.

batch_size: Fixed batch size for the layer.

dtype: The data type expected by the input, as a string (float32, float64, int32...).

name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if not provided.

trainable: Whether the layer weights will be updated during training.

weights: Initial weights for the layer.
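A sketch of the shared_axes case described above (the convolution, filter count, and input shape are illustrative assumptions, not from this page):

  library(keras)

  model <- keras_model_sequential() %>%
    layer_conv_2d(filters = 16, kernel_size = c(3, 3),
                  input_shape = c(28, 28, 1)) %>%
    # one learned alpha per filter, shared across height and width
    layer_activation_parametric_relu(shared_axes = c(1, 2))

Without shared_axes, the layer would instead learn a separate alpha for every (height, width, channel) position of the feature map.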
He, K., Zhang, X., Ren, S., and Sun, J. (2015). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.
Other activation layers: layer_activation(), layer_activation_elu(), layer_activation_leaky_relu(), layer_activation_relu(), layer_activation_selu(), layer_activation_softmax(), layer_activation_thresholded_relu()