A gated attention global pooling layer as presented by Li et al. (2017).
This layer computes:

\[
\boldsymbol{X}' = \sum_{i=1}^{N} \left( \sigma(\boldsymbol{X} \boldsymbol{W}_1 + \boldsymbol{b}_1) \odot (\boldsymbol{X} \boldsymbol{W}_2 + \boldsymbol{b}_2) \right)_i
\]

where \( \sigma \) is the sigmoid activation function and \( \odot \) denotes the element-wise product.
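As a minimal, self-contained sketch of this formula in base R (an illustration under assumed, fixed parameters, not the layer itself, which learns W1, W2, b1, and b2 during training):

# X: N x F node features; W1, W2: F x channels; b1, b2: length-channels
# bias vectors. All parameters are assumed given for illustration.
sigmoid <- function(z) 1 / (1 + exp(-z))

global_attention_pool <- function(X, W1, b1, W2, b2) {
  gate  <- sigmoid(sweep(X %*% W1, 2, b1, "+"))  # sigma(X W1 + b1), N x channels
  feats <- sweep(X %*% W2, 2, b2, "+")           # X W2 + b2, N x channels
  colSums(gate * feats)                          # sum over the N nodes
}

# Example: a 5-node graph with 3 features pooled down to 2 channels.
set.seed(1)
X  <- matrix(rnorm(15), 5, 3)
W1 <- matrix(rnorm(6), 3, 2); b1 <- rnorm(2)
W2 <- matrix(rnorm(6), 3, 2); b2 <- rnorm(2)
global_attention_pool(X, W1, b1, W2, b2)  # length-channels vector

The sigmoid gate acts as a soft, per-channel attention score for each node, so nodes with gate values near zero contribute little to the pooled sum.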
Mode: single, disjoint, mixed, batch.
Input

- Node features of shape ([batch], N, F);
- Graph IDs of shape (N, ) (only in disjoint mode; see the disjoint-mode sketch after this list).

Output

- Pooled node features of shape (batch, channels) (if single mode, shape will be (1, channels)).
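A hedged sketch of how these modes play out, reusing sigmoid() and global_attention_pool() from above (the function and parameter names here are illustrative, not the package's): in disjoint mode the node features of all graphs are stacked along the first axis and the graph-ID vector assigns each node to its graph, so pooling reduces to a per-graph sum, which base R's rowsum() provides; in batch mode each slice of the (batch, N, F) array is pooled independently.

# Disjoint mode: I is the length-N integer vector of graph IDs.
disjoint_attention_pool <- function(X, I, W1, b1, W2, b2) {
  gate  <- sigmoid(sweep(X %*% W1, 2, b1, "+"))
  feats <- sweep(X %*% W2, 2, b2, "+")
  rowsum(gate * feats, group = I)   # one pooled row per graph ID
}

I <- c(1, 1, 1, 2, 2)                        # graphs of 3 and 2 nodes
disjoint_attention_pool(X, I, W1, b1, W2, b2)  # shape (n_graphs, channels)

# Batch mode: pool each (N, F) slice of a (batch, N, F) array.
batch_attention_pool <- function(Xb, W1, b1, W2, b2)
  t(apply(Xb, 1, global_attention_pool, W1 = W1, b1 = b1, W2 = W2, b2 = b2))

Xb <- array(rnorm(2 * 5 * 3), dim = c(2, 5, 3))
batch_attention_pool(Xb, W1, b1, W2, b2)  # shape (batch, channels)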
Arguments

channels: integer, number of output channels;
kernel_initializer: initializer for the kernel matrices;
bias_initializer: initializer for the bias vectors;
kernel_regularizer: regularization applied to the kernel matrices;
bias_regularizer: regularization applied to the bias vectors;
kernel_constraint: constraint applied to the kernel matrices;
bias_constraint: constraint applied to the bias vectors.
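For orientation only, a hypothetical call in the keras-for-R functional style. The constructor name layer_global_attention_pool() is assumed here, not confirmed by this page, so check the package index for the actual name; only the arguments documented above are used.

library(keras)
# Hypothetical usage: a constructor accepting the arguments listed above.
inputs <- layer_input(shape = c(10, 4))   # N = 10 nodes, F = 4 features
pooled <- layer_global_attention_pool(
  inputs,
  channels           = 32,
  kernel_initializer = "glorot_uniform",
  bias_initializer   = "zeros",
  kernel_regularizer = regularizer_l2(1e-4)
)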