View source: R/attention-layers.R
Computes scaled dot-product attention over input query, key, and value tensors (Vaswani et al. 2017).

Arguments:
q: a Tensor with shape [batch, length_q, depth_k]
k: a Tensor with shape [batch, length_kv, depth_k]
v: a Tensor with shape [batch, length_kv, depth_v]
layer_dot_product_attention_1d(
  q,
  k,
  v,
  bias = NULL,
  dropout = 0,
  name = "dot_product_attention"
)
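To make the computation concrete, here is a minimal sketch in plain R of what a single (unbatched) dot-product attention step does: attention weights are a row-wise softmax of q %*% t(k), an optional bias is added to the logits first, and the weights are applied to v. The function name and shapes here are illustrative, not the package's internals, and the dropout applied to the attention weights by the real layer is omitted.

```r
# Hypothetical single-example sketch of dot-product attention.
# q: [length_q, depth_k], k: [length_kv, depth_k], v: [length_kv, depth_v]
dot_product_attention_1d <- function(q, k, v, bias = NULL) {
  logits <- q %*% t(k)                             # [length_q, length_kv]
  if (!is.null(bias)) logits <- logits + bias      # e.g. a masking bias
  # Row-wise softmax (subtracting the row max for numerical stability)
  weights <- exp(logits - apply(logits, 1, max))
  weights <- weights / rowSums(weights)
  weights %*% v                                    # [length_q, depth_v]
}

set.seed(1)
q <- matrix(rnorm(2 * 4), nrow = 2)   # length_q = 2,  depth_k = 4
k <- matrix(rnorm(3 * 4), nrow = 3)   # length_kv = 3, depth_k = 4
v <- matrix(rnorm(3 * 5), nrow = 3)   # length_kv = 3, depth_v = 5
out <- dot_product_attention_1d(q, k, v)
dim(out)  # length_q x depth_v
```

Because each row of the attention weights sums to 1, passing an identity matrix as v returns the weights themselves, which is a handy way to inspect them.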