time_distributed: This layer wrapper applies a layer to every temporal slice of an input

View source: R/layer-wrappers.R

Description

This layer wrapper applies a layer to every temporal slice of an input.

Usage

time_distributed(object, layer, ...)

Arguments

object

What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()). The return value depends on object (see the sketch after this list). If object is:

  • missing or NULL, the Layer instance is returned.

  • a Sequential model, the model with an additional layer is returned.

  • a Tensor, the output tensor from layer_instance(object) is returned.
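
A minimal sketch of the three cases (the dense layer and shapes here are illustrative assumptions, not taken from this page):

library(keras)

# 1. object missing: the wrapped Layer instance is returned
td <- time_distributed(layer = layer_dense(units = 4))

# 2. object is a Sequential model: the model gains the wrapped layer;
#    input_shape is passed through ... as a standard layer argument
model <- keras_model_sequential() %>%
  time_distributed(layer_dense(units = 4), input_shape = c(10, 8))

# 3. object is a Tensor: the output tensor of the wrapped layer is returned
input <- layer_input(shape = c(10, 8))
output <- input %>% time_distributed(layer_dense(units = 4))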

layer

a tf.keras.layers.Layer instance.

...

standard layer arguments.

Details

Every input should be at least 3D, and the dimension at index one of the first input (the second axis, immediately after the batch axis) will be considered to be the temporal dimension.

Consider a batch of 32 video samples, where each sample is a 128x128 RGB image with channels_last data format, across 10 timesteps. The batch input shape is (32, 10, 128, 128, 3).

You can then use TimeDistributed to apply the same Conv2D layer to each of the 10 timesteps, independently:

library(keras)

# 10 timesteps of 128x128 RGB frames; the batch axis is left implicit
input <- layer_input(shape = c(10, 128, 128, 3))
conv_layer <- layer_conv_2d(filters = 64, kernel_size = c(3, 3))
output <- input %>% time_distributed(conv_layer)
output$shape # TensorShape([None, 10, 126, 126, 64])

Because TimeDistributed applies the same instance of Conv2D to each of the timesteps, the same set of weights is used at each timestep.
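
To see the weight sharing directly (a quick, hedged check built on the input and output tensors from the example above):

model <- keras_model(input, output)
length(model$weights) # 2: one shared kernel and one shared bias, not 10 of each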

See Also

Other layer wrappers: bidirectional()

