Arguments

antecedent: Tensor with shape [batch, length, channels]
depth: integer specifying the projection layer depth
filter_width: how wide the attention component should be
padding: must be one of c("VALID", "SAME", "LEFT")
.compute_attention_component(
antecedent,
depth,
filter_width = 1L,
padding = "SAME",
name = "c",
vars_3d_num_heads = 0L
)
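
The call above can be sketched on a concrete tensor. This is a hypothetical illustration, not from the package documentation: it assumes the tensorflow R package is loaded, and that the package exporting `.compute_attention_component` is attached (as a dot-prefixed internal function, it may need to be reached with `:::`). The shapes and the `name` value are invented for the example.

```r
library(tensorflow)

# Illustrative input: batch of 8 sequences, length 32, 64 channels
antecedent <- tf$random$normal(shape = shape(8L, 32L, 64L))

# Project the antecedent to depth 128; with filter_width = 1L this is a
# pointwise (per-position) projection, while a wider filter would mix
# neighboring positions (an assumption about the filter semantics)
q <- .compute_attention_component(
  antecedent,
  depth = 128L,
  filter_width = 1L,
  padding = "SAME",
  name = "q"
)
# q is expected to have shape [8, 32, 128]
```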