.compute_attention_component: compute an attention component from an antecedent tensor

Description

antecedent: Tensor with shape [batch, length, channels].
depth: the projection layer depth.
filter_width: how wide the attention component should be.
padding: one of "VALID", "SAME", or "LEFT".

Usage

.compute_attention_component(
  antecedent,
  depth,
  filter_width = 1L,
  padding = "SAME",
  name = "c",
  vars_3d_num_heads = 0L
)
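
Examples

The sketch below is illustrative rather than taken from the package: it assumes the tensorflow R package is loaded, that the function behaves like the tensor2tensor helper of the same name (a pointwise projection of the antecedent to depth output channels when filter_width = 1L, otherwise a 1D convolution with the given padding), and that the dot-prefixed function is reachable from the calling scope (via transformR::: if it is not exported). All shapes are illustrative.

library(tensorflow)

# Illustrative antecedent: batch of 2 sequences, length 16, 64 channels
antecedent <- tf$random$normal(shape = c(2L, 16L, 64L))

# Project to a query-style component with 128 output channels;
# filter_width = 1L makes this a pointwise (dense) projection.
q <- .compute_attention_component(
  antecedent,
  depth = 128L,
  filter_width = 1L,
  padding = "SAME",
  name = "q"
)

q$shape  # expected: (2, 16, 128)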
