layer_sag_pool: SAGPool


View source: R/layers_pool.R

Description


A self-attention graph pooling layer as presented by Lee et al. (2019).

Mode: single, disjoint.

This layer computes the following operations:

$$
\boldsymbol{y} = \textrm{GNN}(\boldsymbol{A}, \boldsymbol{X}); \;\;\;\;
\boldsymbol{i} = \textrm{rank}(\boldsymbol{y}, K); \;\;\;\;
\boldsymbol{X}' = (\boldsymbol{X} \odot \tanh(\boldsymbol{y}))_{\boldsymbol{i}}; \;\;\;\;
\boldsymbol{A}' = \boldsymbol{A}_{\boldsymbol{i}, \boldsymbol{i}}
$$

where \(\textrm{rank}(\boldsymbol{y}, K)\) returns the indices of the top \(K\) values of \(\boldsymbol{y}\), and \(\textrm{GNN}\) consists of one GraphConv layer with no activation. \(K\) is defined for each graph as a fraction of the number of nodes, controlled by the ratio argument.
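
A minimal sketch of these operations in base R, on a small dense graph. The GNN scoring step is stood in for by a single unnormalised propagation with a fixed random weight vector w; in the layer itself, the scores come from a trained GraphConv layer.

set.seed(1)
n <- 6; f <- 4                               # nodes, features (illustrative)
A <- matrix(rbinom(n * n, 1, 0.4), n, n)     # random binary adjacency
A <- pmax(A, t(A))                           # make it symmetric
X <- matrix(rnorm(n * f), n, f)              # node features

w <- rnorm(f)                                # stand-in for learned GNN weights
y <- as.vector(A %*% X %*% w)                # y = GNN(A, X): one score per node

K <- ceiling(0.5 * n)                        # ratio = 0.5 of the nodes
i <- order(y, decreasing = TRUE)[1:K]        # i = rank(y, K): top-K indices

X_prime <- (X * tanh(y))[i, , drop = FALSE]  # X' = (X * tanh(y))_i
A_prime <- A[i, i, drop = FALSE]             # A' = A_{i,i}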

This layer temporarily makes the adjacency matrix dense in order to compute \(\boldsymbol{A}'\). Converting a graph from sparse to dense and back to sparse is an expensive operation, so if memory is not an issue, considerable speedups can be achieved by using dense graphs directly.
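
For example, a sparse adjacency matrix can be densified once, before training, rather than on every forward pass. The Matrix package is used here purely for illustration:

library(Matrix)

# Illustrative 4-node graph with one undirected edge, stored sparsely
A_sparse <- sparseMatrix(i = c(1, 2), j = c(2, 1), x = c(1, 1), dims = c(4, 4))

# Densify once, up front, and feed the dense matrix to the model
A_dense <- as.matrix(A_sparse)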

Input

Node features of shape ([batch], N, F), where N is the number of nodes and F the number of node features;

Binary adjacency matrix of shape ([batch], N, N);

Graph IDs of shape (N, ) (only in disjoint mode).

Output

Reduced node features of shape ([batch], ratio * N, F);

Reduced adjacency matrix of shape ([batch], ratio * N, ratio * N);

Reduced graph IDs of shape (ratio * N, ) (only in disjoint mode);

If return_mask = TRUE, the binary mask used for pooling.

Usage

layer_sag_pool(
  object,
  ratio,
  return_mask = FALSE,
  sigmoid_gating = FALSE,
  kernel_initializer = "glorot_uniform",
  kernel_regularizer = NULL,
  kernel_constraint = NULL,
  ...
)
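
A hypothetical usage sketch, assuming keras-style functional composition in which object is the list of node-feature and adjacency tensors (mirroring how the underlying Spektral layer is called in Python). The input names and shapes are illustrative, not taken from the package's examples:

library(keras)
library(rspektral)

n_features <- 8  # illustrative node feature dimension

# Single mode: batched node features and a dense adjacency matrix
x_in <- layer_input(shape = list(NULL, n_features))
a_in <- layer_input(shape = list(NULL, NULL))

# Keep half of the nodes in each graph; the list-of-tensors calling
# convention is an assumption here
pooled <- layer_sag_pool(list(x_in, a_in), ratio = 0.5)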

Arguments

ratio

float between 0 and 1, ratio of nodes to keep in each graph

return_mask

boolean, whether to return the binary mask used for pooling

sigmoid_gating

boolean, use a sigmoid gating activation instead of a tanh

kernel_initializer

initializer for the weights

kernel_regularizer

regularization applied to the weights

kernel_constraint

constraint applied to the weights

