View source: R/attention-layers.R
Lambda layer implementation of multi-head attention (multihead_attention)
layer_multihead_attention(
  query,
  memory = NULL,
  bias = NULL,
  key_depth = 64L,
  value_depth = 64L,
  output_depth = 128L,
  num_heads = 4L,
  dropout = 0,
  attention_type = "dot_product",
  q_filter_width = 1L,
  kv_filter_width = 1L,
  q_padding = "SAME",
  kv_padding = "SAME",
  max_area_width = 1L,
  max_area_height = 1L,
  memory_height = 1L,
  area_key_mode = "mean",
  area_value_mode = "sum",
  vars_3d = TRUE
)
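A minimal usage sketch follows. It assumes the package exporting layer_multihead_attention is attached alongside keras, that query accepts a Keras tensor of shape (batch, length, channels), and that memory = NULL makes the layer self-attend over query (mirroring tensor2tensor's multihead_attention semantics); the shapes and values below are hypothetical.

library(keras)

# Hypothetical input: sequences of length 100 with 128-dim embeddings.
inputs <- layer_input(shape = c(100L, 128L))

# Self-attention (memory = NULL is assumed to attend query to itself);
# output_depth sets the channel width of the returned tensor.
attended <- layer_multihead_attention(
  query        = inputs,
  memory       = NULL,
  key_depth    = 64L,
  value_depth  = 64L,
  output_depth = 128L,
  num_heads    = 4L,
  dropout      = 0.1
)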