layer_gin_conv: GINConv


View source: R/layers_conv.R

Description


A Graph Isomorphism Network (GIN) as presented by Xu et al. (2018).

Mode: single, disjoint.

This layer expects a sparse adjacency matrix.

This layer computes, for each node $i$:

$$Z_i = \mathrm{MLP}\big( (1 + \epsilon) \cdot X_i + \sum_{j \in \mathcal{N}(i)} X_j \big)$$

where $\mathrm{MLP}$ is a multi-layer perceptron.
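In matrix form this is Z = MLP((1 + epsilon) * X + A X), where A is the binary adjacency matrix. Below is a minimal sketch of the aggregation step in plain R, using the Matrix package for the sparse adjacency; the toy graph and feature values are illustrative, and the layer's internal MLP is only indicated in a comment:

library(Matrix)

# Toy undirected graph: 3 nodes, edges 1-2 and 2-3, 2 features per node
A <- sparseMatrix(i = c(1, 2, 2, 3), j = c(2, 1, 3, 2), x = 1, dims = c(3, 3))
X <- matrix(rnorm(6), nrow = 3, ncol = 2)

epsilon <- 0
# A %*% X sums the features of each node's neighbours in one sparse product
Z_pre <- (1 + epsilon) * X + as.matrix(A %*% X)
# The layer then passes Z_pre through its MLP to produce Z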

Input

Node features of shape (n_nodes, n_node_features);
Sparse adjacency matrix of shape (n_nodes, n_nodes).

Output

Node features with the same shape as the input, but with the last dimension changed to channels.

Usage

layer_gin_conv(
  object,
  channels,
  epsilon = NULL,
  mlp_hidden = NULL,
  mlp_activation = "relu",
  activation = NULL,
  use_bias = TRUE,
  kernel_initializer = "glorot_uniform",
  bias_initializer = "zeros",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  activity_regularizer = NULL,
  kernel_constraint = NULL,
  bias_constraint = NULL,
  ...
)
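A hedged usage sketch: assuming rspektral follows the keras-for-R convention that object may be a list of input tensors (here node features plus a sparse adjacency matrix, as in Spektral's Python API), a single-mode model could be assembled as follows. The shapes and hyperparameters are illustrative, not taken from the package:

library(keras)
library(rspektral)

n_features <- 4  # features per node (illustrative)

x_in <- layer_input(shape = c(n_features))              # node features
a_in <- layer_input(shape = list(NULL), sparse = TRUE)  # sparse adjacency

out <- layer_gin_conv(
  list(x_in, a_in),
  channels = 32,
  mlp_hidden = list(64L, 64L),
  mlp_activation = "relu",
  activation = "relu"
)

model <- keras_model(inputs = list(x_in, a_in), outputs = out)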

Arguments

channels

integer, number of output channels

epsilon

unnamed parameter, see Xu et al. (2018) and the equation above. By setting epsilon = NULL, the parameter will be learned (the default behaviour). If given as a value, the parameter will stay fixed.

mlp_hidden

list of integers, number of hidden units for each hidden layer in the MLP (if NULL, the MLP has only the output layer)

mlp_activation

activation for the MLP layers

activation

activation function to use

use_bias

bool, add a bias vector to the output

kernel_initializer

initializer for the weights

bias_initializer

initializer for the bias vector

kernel_regularizer

regularization applied to the weights

bias_regularizer

regularization applied to the bias vector

activity_regularizer

regularization applied to the output

kernel_constraint

constraint applied to the weights

bias_constraint

constraint applied to the bias vector
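Xu et al. (2018) refer to the fixed epsilon = 0 variant as GIN-0 and report that it generalises well. A hedged sketch of both settings, reusing the illustrative x_in and a_in inputs from the usage example above:

# epsilon = NULL (default): epsilon is a weight learned during training
out_learned <- layer_gin_conv(list(x_in, a_in), channels = 32)

# epsilon = 0: the fixed GIN-0 variant of Xu et al. (2018)
out_fixed <- layer_gin_conv(list(x_in, a_in), channels = 32, epsilon = 0)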

