layer_global_attn_sum_pool: GlobalAttnSumPool

View source: R/layers_pool.R

Description

A node-attention global pooling layer. Pools a graph by learning attention coefficients to sum node features.

This layer computes:

$$\alpha = \textrm{softmax}\left( \boldsymbol{X} \boldsymbol{a} \right); \qquad \boldsymbol{X}' = \sum\limits_{i=1}^{N} \alpha_i \cdot \boldsymbol{X}_i$$

where $\boldsymbol{a} \in \mathbb{R}^F$ is a trainable vector. Note that the softmax is applied across nodes, not across features.
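The computation above can be sketched numerically in plain R (a minimal illustration of the math, not the layer implementation; all variable names here are made up for the example):

```r
# X: N x F matrix of node features; a: trainable vector of length F.
set.seed(1)
N <- 4; F_dim <- 3
X <- matrix(rnorm(N * F_dim), nrow = N)
a <- rnorm(F_dim)

softmax <- function(z) exp(z) / sum(exp(z))

scores <- as.vector(X %*% a)          # one scalar score per node
alpha  <- softmax(scores)             # attention over the N nodes; sums to 1
X_pooled <- as.vector(t(X) %*% alpha) # weighted sum of node features, length F

# The pooled result is a single vector of length F, regardless of N.
stopifnot(length(X_pooled) == F_dim)
```

Because the softmax is taken over the node axis, the coefficients `alpha` form a probability distribution over nodes, and the output is a convex combination of the node feature rows.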

Mode: single, disjoint, mixed, batch.

Input

Node features of shape `([batch], N, F)`.

Output

Pooled node features of shape `(batch, F)` (i.e. one vector of length `F` per graph; `(1, F)` in single mode).

Usage

layer_global_attn_sum_pool(
  object,
  attn_kernel_initializer = "glorot_uniform",
  attn_kernel_regularizer = NULL,
  attn_kernel_constraint = NULL,
  ...
)
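A hypothetical usage sketch follows (the input shape and surrounding model code are illustrative assumptions, not taken from this page; they assume the usual keras R pipe-based composition):

```r
library(keras)
library(rspektral)

# Node features: a variable number of nodes, each with 32 features.
node_input <- layer_input(shape = list(NULL, 32))

# Pool all node features into a single graph-level embedding of length 32.
graph_embedding <- node_input %>%
  layer_global_attn_sum_pool()

model <- keras_model(inputs = node_input, outputs = graph_embedding)
```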

Arguments

object

Model or layer object to compose with.

attn_kernel_initializer

Initializer for the attention weights.

attn_kernel_regularizer

Regularization applied to the attention kernel.

attn_kernel_constraint

Constraint applied to the attention kernel.

...

Additional arguments passed on to the underlying layer.


rdinnager/rspektral documentation built on June 12, 2021, 1:26 a.m.