View source: R/layers-convolutional.R

Description
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
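A minimal sketch of this idea in the R keras interface (the shapes, filter counts, and object names below are illustrative, not part of this help page):

library(keras)

# Suppose an encoder has produced an 8x8x8 volume with 32 feature channels.
inputs <- layer_input(shape = c(8, 8, 8, 32))

# The transposed 3D convolution maps this back toward the resolution a
# regular strided convolution would have consumed: with strides = 2 and
# padding = "same", every spatial dimension doubles, giving an output of
# shape (batch, 16, 16, 16, 16).
outputs <- inputs %>%
  layer_conv_3d_transpose(
    filters = 16,
    kernel_size = c(3, 3, 3),
    strides = c(2, 2, 2),
    padding = "same"
  )

model <- keras_model(inputs, outputs)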
Usage

layer_conv_3d_transpose(
object,
filters,
kernel_size,
strides = c(1, 1, 1),
padding = "valid",
output_padding = NULL,
data_format = NULL,
activation = NULL,
use_bias = TRUE,
kernel_initializer = "glorot_uniform",
bias_initializer = "zeros",
kernel_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
kernel_constraint = NULL,
bias_constraint = NULL,
input_shape = NULL,
batch_input_shape = NULL,
batch_size = NULL,
dtype = NULL,
name = NULL,
trainable = NULL,
weights = NULL
)
Arguments

object
    Model or layer object.

filters
    Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).

kernel_size
    An integer or list of 3 integers, specifying the depth, height, and width of the 3D convolution window. Can be a single integer to specify the same value for all spatial dimensions.

strides
    An integer or list of 3 integers, specifying the strides of the convolution along the depth, height, and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.

padding
    One of "valid" or "same" (case-insensitive).

output_padding
    An integer or list of 3 integers, specifying the amount of padding along the depth, height, and width of the output tensor. Can be a single integer to specify the same value for all spatial dimensions. The amount of output padding along a given dimension must be lower than the stride along that same dimension. If set to NULL (the default), the output shape is inferred. The shape sketch after this argument list illustrates how strides, padding, and output_padding interact.

data_format
    A string, one of "channels_last" or "channels_first", giving the ordering of the dimensions in the inputs: "channels_last" corresponds to inputs with shape (batch, depth, height, width, channels) while "channels_first" corresponds to inputs with shape (batch, channels, depth, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".

activation
    Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).

use_bias
    Boolean, whether the layer uses a bias vector.

kernel_initializer
    Initializer for the kernel weights matrix.

bias_initializer
    Initializer for the bias vector.

kernel_regularizer
    Regularizer function applied to the kernel weights matrix.

bias_regularizer
    Regularizer function applied to the bias vector.

activity_regularizer
    Regularizer function applied to the output of the layer (its "activation").

kernel_constraint
    Constraint function applied to the kernel matrix.

bias_constraint
    Constraint function applied to the bias vector.

input_shape
    Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model.

batch_input_shape
    Shapes, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors.

batch_size
    Fixed batch size for the layer.

dtype
    The data type expected by the input, as a string ("float32", "float64", "int32"...).

name
    An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if not provided.

trainable
    Whether the layer weights will be updated during training.

weights
    Initial weights for the layer.
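The following sketch compares how padding and output_padding affect the output shape (the input shape, filter count, and kernel size are assumed values chosen for illustration). With strides = 2 and kernel_size = 3 on an 8x8x8 input, "same" padding yields 16 along each spatial dimension, "valid" padding yields 17, and "valid" padding with output_padding = 1 yields 18; the output padding must stay strictly below the stride.

library(keras)

inputs <- layer_input(shape = c(8, 8, 8, 32))

# padding = "same": each spatial dimension becomes input * stride = 16
same_out <- inputs %>%
  layer_conv_3d_transpose(filters = 8, kernel_size = 3, strides = 2,
                          padding = "same")

# padding = "valid": (input - 1) * stride + kernel = 17
valid_out <- inputs %>%
  layer_conv_3d_transpose(filters = 8, kernel_size = 3, strides = 2,
                          padding = "valid")

# padding = "valid" with output_padding = 1: 17 + 1 = 18
# (output_padding must be strictly less than the stride, here < 2)
valid_padded_out <- inputs %>%
  layer_conv_3d_transpose(filters = 8, kernel_size = 3, strides = 2,
                          padding = "valid", output_padding = 1)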
Details

When using this layer as the first layer in a model, provide the keyword argument input_shape (a list of integers, not including the sample axis), e.g. input_shape = list(128, 128, 128, 3) for 128x128x128 volumes with 3 channels if data_format = "channels_last".
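For example, a minimal sketch of using this layer as the first layer of a sequential model (the filter count, kernel size, and strides below are illustrative):

library(keras)

model <- keras_model_sequential() %>%
  layer_conv_3d_transpose(
    filters = 16,
    kernel_size = c(3, 3, 3),
    strides = c(2, 2, 2),
    padding = "same",
    input_shape = list(128, 128, 128, 3)  # 128x128x128 volumes, 3 channels
  )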
See Also

Other convolutional layers: layer_conv_1d(), layer_conv_2d_transpose(), layer_conv_2d(), layer_conv_3d(), layer_conv_lstm_2d(), layer_cropping_1d(), layer_cropping_2d(), layer_cropping_3d(), layer_depthwise_conv_2d(), layer_separable_conv_1d(), layer_separable_conv_2d(), layer_upsampling_1d(), layer_upsampling_2d(), layer_upsampling_3d(), layer_zero_padding_1d(), layer_zero_padding_2d(), layer_zero_padding_3d()