A node-attention global pooling layer. Pools a graph by learning attention coefficients to sum node features.
This layer computes:

\alpha = \mathrm{softmax}(\mathbf{X}\mathbf{a}); \quad \mathbf{X}' = \sum_{i=1}^{N} \alpha_i \, \mathbf{X}_i

where \mathbf{a} \in \mathbb{R}^{F} is a trainable vector. Note that the softmax is applied across nodes, and not across features.
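For intuition, here is a minimal base-R sketch of the formula above for a single graph. It is illustrative only, not the layer's implementation; the values of X, a, N and F are made up:

set.seed(1)
N <- 5; F_dim <- 3
X <- matrix(rnorm(N * F_dim), nrow = N, ncol = F_dim)  # node features, shape (N, F)
a <- rnorm(F_dim)                                       # stand-in for the trainable vector a

scores <- as.vector(X %*% a)                 # one attention score per node (length N)
alpha  <- exp(scores) / sum(exp(scores))     # softmax across nodes, not across features
X_pooled <- colSums(alpha * X)               # weighted sum of node features -> length-F vector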
Mode: single, disjoint, mixed, batch.
Input

Node features of shape ([batch], N, F);
Graph IDs of shape (N, ) (only in disjoint mode).

Output

Pooled node features of shape (batch, F) (if single mode, shape will be (1, F)).
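To make the disjoint-mode shapes concrete, the sketch below pools two graphs stacked into one node matrix, grouping nodes by their graph IDs. This is illustrative base R only, with made-up values, not the layer's implementation:

X  <- matrix(rnorm(6 * 3), nrow = 6, ncol = 3)   # 6 nodes in total, F = 3
id <- c(1, 1, 1, 2, 2, 2)                        # graph IDs of shape (N, )
a  <- rnorm(3)                                   # stand-in for the trainable vector a

pool_one <- function(rows) {
  scores <- as.vector(X[rows, , drop = FALSE] %*% a)
  alpha  <- exp(scores) / sum(exp(scores))       # softmax within one graph
  colSums(alpha * X[rows, , drop = FALSE])       # weighted sum for that graph
}
pooled <- t(sapply(split(seq_len(nrow(X)), id), pool_one))  # shape (batch, F) = (2, 3)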
layer_global_attn_sum_pool(
  object,
  attn_kernel_initializer = "glorot_uniform",
  attn_kernel_regularizer = NULL,
  attn_kernel_constraint = NULL,
  ...
)
Arguments

attn_kernel_initializer: initializer for the attention weights
attn_kernel_regularizer: regularization applied to the attention kernel matrix
attn_kernel_constraint: constraint applied to the attention kernel matrix