layer_top_k_pool: TopKPool


View source: R/layers_pool.R

Description


A gPool/Top-K layer as presented by Gao & Ji (2019) and Cangea et al. (2018).

Mode: single, disjoint.

This layer computes the following operations:

$$
\mathbf{y} = \frac{\mathbf{X}\mathbf{p}}{\|\mathbf{p}\|}; \;\;\;\;
\mathbf{i} = \mathrm{rank}(\mathbf{y}, K); \;\;\;\;
\mathbf{X}' = (\mathbf{X} \odot \tanh(\mathbf{y}))_{\mathbf{i}}; \;\;\;\;
\mathbf{A}' = \mathbf{A}_{\mathbf{i}, \mathbf{i}}
$$

where $\mathrm{rank}(\mathbf{y}, K)$ returns the indices of the top $K$ values of $\mathbf{y}$, and $\mathbf{p}$ is a learnable parameter vector of size $F$. $K$ is defined for each graph as a fraction of the number of nodes. Note that the gating operation $\tanh(\mathbf{y})$ (Cangea et al.) can be replaced with a sigmoid (Gao & Ji).

This layer temporarily makes the adjacency matrix dense in order to compute $\mathbf{A}'$. If memory is not an issue, considerable speedups can be achieved by using dense graphs directly. Converting a graph from sparse to dense and back to sparse is an expensive operation.
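For illustration only, here is a minimal sketch of these operations in plain R on a toy dense graph. The values of X, A, p and the 0.5 keep ratio are made up; this is not the layer's implementation, which learns p during training:

set.seed(1)
n <- 6; f <- 4                              # toy graph: 6 nodes, 4 features
X <- matrix(rnorm(n * f), n, f)             # node features X
A <- matrix(rbinom(n * n, 1, 0.4), n, n)    # toy dense adjacency A
p <- rnorm(f)                               # projection vector p (learned by the layer)

y <- as.vector(X %*% p) / sqrt(sum(p^2))    # y = X p / ||p||
K <- ceiling(0.5 * n)                       # K as a fraction (ratio = 0.5) of the nodes
i <- order(y, decreasing = TRUE)[1:K]       # i = rank(y, K): indices of the top K scores

Xp <- (X * tanh(y))[i, , drop = FALSE]      # X' = (X (*) tanh(y))_i  (row-wise gating)
Ap <- A[i, i, drop = FALSE]                 # A' = A_{i, i}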

Input

Output

Usage

layer_top_k_pool(
  object,
  ratio,
  return_mask = FALSE,
  sigmoid_gating = FALSE,
  kernel_initializer = "glorot_uniform",
  kernel_regularizer = NULL,
  kernel_constraint = NULL,
  ...
)

Arguments

ratio

float between 0 and 1, ratio of nodes to keep in each graph

return_mask

boolean, whether to return the binary mask used for pooling

sigmoid_gating

boolean, use a sigmoid gating activation instead of a tanh

kernel_initializer

initializer for the weights

kernel_regularizer

regularization applied to the weights

kernel_constraint

constraint applied to the weights
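As a hedged usage sketch, the snippet below composes the layer on keras inputs. The input shapes, the list-of-tensors call, and the use of keras_model are illustrative assumptions about the keras-style API and are not documented on this page:

library(keras)
library(rspektral)

n_nodes    <- 100   # illustrative graph size (single mode: one graph at a time)
n_features <- 8     # illustrative feature dimension

# Node features and a dense adjacency matrix as model inputs.
x_in <- layer_input(shape = c(n_nodes, n_features))
a_in <- layer_input(shape = c(n_nodes, n_nodes))

# Keep roughly half of the nodes; also return the binary pooling mask.
outputs <- layer_top_k_pool(
  list(x_in, a_in),
  ratio = 0.5,
  return_mask = TRUE
)

model <- keras_model(inputs = list(x_in, a_in), outputs = outputs)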

