multihead_attention: Multihead attention mechanism over query and memory tensors

Description Usage

View source: R/attention-layers.R

Description

Multihead attention mechanism. The query tensor has shape [batch, seqlen, depth_q]; the memory tensor has shape [batch, seqlen, depth_m].

Usage

multihead_attention(
  query,
  memory = NULL,
  bias = NULL,
  key_depth = 64L,
  value_depth = 64L,
  output_depth = 128L,
  num_heads = 4L,
  dropout = 0,
  attention_type = "dot_product",
  q_filter_width = 1L,
  kv_filter_width = 1L,
  q_padding = "SAME",
  kv_padding = "SAME",
  max_area_width = 1L,
  max_area_height = 1L,
  memory_height = 1L,
  area_key_mode = "mean",
  area_value_mode = "sum",
  vars_3d = FALSE
)
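
Examples

A minimal sketch of calling multihead_attention() on dummy tensors. It assumes the tensorflow R package is available and transformR is attached; tensor shapes follow the [batch, seqlen, depth] convention from the Description, and the treatment of memory = NULL as self-attention is an assumption, not confirmed by this page.

library(tensorflow)
library(transformR)

batch   <- 2L
seqlen  <- 16L
depth_q <- 32L

# Query tensor of shape [batch, seqlen, depth_q]
query <- tf$random$normal(shape = shape(batch, seqlen, depth_q))

out <- multihead_attention(
  query        = query,
  memory       = NULL,   # assumed: NULL memory means self-attention over `query`
  key_depth    = 64L,
  value_depth  = 64L,
  output_depth = 128L,
  num_heads    = 4L,
  dropout      = 0
)

# Expected output shape: [batch, seqlen, output_depth]
out$shape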
