View source: R/attention-layers.R
View source: R/attention-utils.R
Arguments:

antecedent: Tensor with shape [batch, length, channels].
depth: Integer specifying the projection layer depth.
filter_width: How wide the attention component should be.
padding: Must be one of: c("valid", "same", "left").
compute_attention_component(
  antecedent,
  depth,
  filter_width = 1L,
  padding = "valid",
  name = "c",
  vars_3d_num_heads = 0L
)
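A minimal usage sketch, assuming the package is attached and a TensorFlow backend is available. The tensor shapes and the component name "q" are illustrative, not part of the documented API:

```r
library(tensorflow)

# Illustrative input: [batch = 2, length = 8, channels = 16]
antecedent <- tf$random$normal(shape = shape(2L, 8L, 16L))

# Project the antecedent to a depth-32 attention component using a
# pointwise projection (filter_width = 1L); the result should have
# shape [2, 8, 32].
q <- compute_attention_component(
  antecedent,
  depth = 32L,
  filter_width = 1L,
  padding = "valid",
  name = "q"
)
```

With `filter_width = 1L` the projection is a per-position linear map; a wider `filter_width` mixes information across neighboring positions, with `padding` controlling how the sequence boundaries are handled.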