layer_dot_product_attention_1d: Input query, key, and value matrices are used to compute dot product attention


View source: R/attention-layers.R

Description

Input query, key, and value matrices are used to compute dot product attention (Vaswani et al. 2017).

q: a Tensor with shape [batch, length_q, depth_k]
k: a Tensor with shape [batch, length_kv, depth_k]
v: a Tensor with shape [batch, length_kv, depth_v]
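
Conceptually, dot product attention computes weights softmax(q k^T + bias) over the key positions, optionally applies dropout to them, and returns the weighted sum of v, giving an output of shape [batch, length_q, depth_v]. The following is a minimal sketch of this standard (unscaled) computation, written against the tensorflow R package (TensorFlow 2.x API); the helper dot_product_attention_sketch is a hypothetical illustration, not the package's actual implementation:

library(tensorflow)

# hypothetical helper illustrating standard (unscaled) dot product attention;
# not the package's implementation
dot_product_attention_sketch <- function(q, k, v, bias = NULL, dropout = 0) {
  # attention logits: [batch, length_q, length_kv]
  logits <- tf$matmul(q, k, transpose_b = TRUE)
  if (!is.null(bias)) logits <- logits + bias

  # attention weights, normalized over key positions
  weights <- tf$nn$softmax(logits)
  if (dropout > 0) weights <- tf$nn$dropout(weights, rate = dropout)

  # output: [batch, length_q, depth_v]
  tf$matmul(weights, v)
}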

Usage

layer_dot_product_attention_1d(
  q,
  k,
  v,
  bias = NULL,
  dropout = 0,
  name = "dot_product_attention"
)
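
A minimal usage sketch, assuming the package is loaded as transformR and the tensorflow R package (with TensorFlow 2.x) is available; the dimensions below are arbitrary illustrative values:

library(tensorflow)
library(transformR)

batch     <- 2L
length_q  <- 5L
length_kv <- 7L
depth_k   <- 16L
depth_v   <- 32L

# random query, key, and value tensors with the documented shapes
q <- tf$random$normal(shape = c(batch, length_q,  depth_k))
k <- tf$random$normal(shape = c(batch, length_kv, depth_k))
v <- tf$random$normal(shape = c(batch, length_kv, depth_v))

# expected output shape: [batch, length_q, depth_v]
out <- layer_dot_product_attention_1d(q, k, v, dropout = 0)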
