layer_zero_padding_2d: Zero-padding layer for 2D input (e.g. picture).


Description

This layer can add rows and columns of zeros at the top, bottom, left and right side of an image tensor.

Usage

layer_zero_padding_2d(object, padding = list(1L, 1L), data_format = NULL, ...)

Arguments

object

Object to compose the layer with. A tensor, array, or sequential model.

padding

Int, or list of 2 ints, or list of 2 lists of 2 ints; the accepted forms are sketched after this list.

  • If int: the same symmetric padding is applied to height and width.

  • If list of 2 ints: interpreted as two different symmetric padding values for height and width: (symmetric_height_pad, symmetric_width_pad).

  • If list of 2 lists of 2 ints: interpreted as ((top_pad, bottom_pad), (left_pad, right_pad)).
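
A minimal sketch of the three forms (assuming the keras3 package is attached; the tensor x mirrors the one built in the Example section below):

x <- op_reshape(seq_len(4), c(1, 1, 2, 2))              # shape (1, 1, 2, 2)

layer_zero_padding_2d(x, padding = 1)                   # pad height and width by 1 on every side
layer_zero_padding_2d(x, padding = list(1L, 2L))        # (symmetric_height_pad, symmetric_width_pad)
layer_zero_padding_2d(x, padding = list(list(1L, 0L),   # (top_pad, bottom_pad)
                                        list(2L, 3L)))  # (left_pad, right_pad)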

data_format

A string, one of "channels_last" (default) or "channels_first". The ordering of the dimensions in the inputs. "channels_last" corresponds to inputs with shape (batch_size, height, width, channels) while "channels_first" corresponds to inputs with shape (batch_size, channels, height, width). When unspecified, uses the image_data_format value found in your Keras config file at ~/.keras/keras.json (if it exists). Defaults to "channels_last".
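
A hedged sketch of the difference (keras3 assumed attached; the same padding = 1 pads the two spatial axes, which sit in different positions depending on data_format; expected shapes are noted as comments):

x_last  <- op_reshape(seq_len(1 * 4 * 4 * 3), c(1, 4, 4, 3))   # (batch_size, height, width, channels)
x_first <- op_reshape(seq_len(1 * 3 * 4 * 4), c(1, 3, 4, 4))   # (batch_size, channels, height, width)

layer_zero_padding_2d(x_last,  padding = 1, data_format = "channels_last")
## expected output shape: (1, 6, 6, 3)

layer_zero_padding_2d(x_first, padding = 1, data_format = "channels_first")
## expected output shape: (1, 3, 6, 6)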

...

For forward/backward compatibility.

Value

The return value depends on the value provided for the first argument; the three cases are sketched after this list. If object is:

  • a keras_model_sequential(), then the layer is added to the sequential model (which is modified in place). To enable piping, the sequential model is also returned, invisibly.

  • a keras_input(), then the output tensor from calling layer(input) is returned.

  • NULL or missing, then a Layer instance is returned.
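
A minimal sketch of the three cases (assuming the keras3 package is attached; the input shape c(28, 28, 1) is only an illustrative choice):

# Sequential model: modified in place and returned invisibly, so piping works.
model <- keras_model_sequential(input_shape = c(28, 28, 1)) |>
  layer_zero_padding_2d(padding = 2)

# keras_input(): the symbolic output tensor is returned.
inputs  <- keras_input(shape = c(28, 28, 1))
outputs <- layer_zero_padding_2d(inputs, padding = 2)

# No object: a Layer instance is returned and can be applied later.
pad_layer <- layer_zero_padding_2d(padding = 2)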

Example

input_shape <- c(1, 1, 2, 2)
x <- op_reshape(seq_len(prod(input_shape)), input_shape)
x
## tf.Tensor(
## [[[[1 2]
##    [3 4]]]], shape=(1, 1, 2, 2), dtype=int32)

y <- layer_zero_padding_2d(x, padding = 1)
y
## tf.Tensor(
## [[[[0 0]
##    [0 0]
##    [0 0]
##    [0 0]]
##
##   [[0 0]
##    [1 2]
##    [3 4]
##    [0 0]]
##
##   [[0 0]
##    [0 0]
##    [0 0]
##    [0 0]]]], shape=(1, 3, 4, 2), dtype=int32)
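
As a follow-on sketch (not part of the original example, but reusing the same x), the list-of-lists form gives asymmetric padding:

y2 <- layer_zero_padding_2d(x, padding = list(list(0L, 1L), list(2L, 1L)))
y2
## expected shape: (1, 2, 5, 2), i.e. height 1 + 0 + 1 and width 2 + 2 + 1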

Input Shape

4D tensor with shape:

  • If data_format is "channels_last": (batch_size, height, width, channels)

  • If data_format is "channels_first": (batch_size, channels, height, width)

Output Shape

4D tensor with shape:

  • If data_format is "channels_last": (batch_size, padded_height, padded_width, channels)

  • If data_format is "channels_first": (batch_size, channels, padded_height, padded_width)
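
For instance, in the example above (padding = 1 with the default "channels_last" format), padded_height = 1 + 1 + 1 = 3 and padded_width = 2 + 1 + 1 = 4, which matches the printed output shape (1, 3, 4, 2).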

See Also

Other reshaping layers:
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_flatten()
layer_permute()
layer_repeat_vector()
layer_reshape()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_3d()

Other layers:
Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_random_brightness()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_rotation()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()

