An Edge Convolutional layer as presented by Wang et al. (2018).
Mode: single, disjoint.
This layer expects a sparse adjacency matrix.
This layer computes, for each node $i$:
$$Z_i = \sum_{j \in \mathcal{N}(i)} \mathrm{MLP}\big( X_i \,\|\, X_j - X_i \big)$$
where $\mathrm{MLP}$ is a multi-layer perceptron and $\|$ denotes concatenation.
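To make the update concrete, here is a minimal base-R sketch of the aggregation, with a single linear map standing in for the MLP. The toy graph, the weight matrix W, and the sizes are illustrative assumptions, not part of the layer's API:

```r
set.seed(1)
N <- 4; F_in <- 3; channels <- 2
X <- matrix(rnorm(N * F_in), nrow = N)       # node features, shape (N, F)
A <- matrix(c(0, 1, 1, 0,                    # binary adjacency, shape (N, N)
              1, 0, 1, 0,
              1, 1, 0, 1,
              0, 0, 1, 0), nrow = N, byrow = TRUE)
W <- matrix(rnorm(2 * F_in * channels), nrow = 2 * F_in)  # linear stand-in for the MLP

Z <- matrix(0, nrow = N, ncol = channels)
for (i in seq_len(N)) {
  for (j in which(A[i, ] == 1)) {
    # edge message: MLP(X_i || X_j - X_i), here a single linear map
    msg <- c(X[i, ], X[j, ] - X[i, ]) %*% W
    Z[i, ] <- Z[i, ] + drop(msg)
  }
}
Z  # aggregated node features, shape (N, channels)
```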
Input

- Node features of shape (N, F);
- Binary adjacency matrix of shape (N, N).
Output

- Node features with the same shape as the input, but with the last dimension changed to channels.
Usage

layer_edge_conv(
object,
channels,
mlp_hidden = NULL,
mlp_activation = "relu",
activation = NULL,
use_bias = TRUE,
kernel_initializer = "glorot_uniform",
bias_initializer = "zeros",
kernel_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
kernel_constraint = NULL,
bias_constraint = NULL,
...
)
Arguments

channels: integer, number of output channels
mlp_hidden: list of integers, number of hidden units for each hidden layer in the MLP (if NULL, the MLP has only the output layer)
mlp_activation: activation function for the hidden MLP layers
activation: activation function to use
use_bias: boolean, whether to add a bias vector to the output
kernel_initializer: initializer for the weights
bias_initializer: initializer for the bias vector
kernel_regularizer: regularization applied to the weights
bias_regularizer: regularization applied to the bias vector
activity_regularizer: regularization applied to the output
kernel_constraint: constraint applied to the weights
bias_constraint: constraint applied to the bias vector
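A usage sketch follows, assuming the keras R package's functional API and that the layer accepts a list(node_features, adjacency) pair as its input, as in Spektral's Python interface. The input names, shapes, and hyperparameters are illustrative assumptions:

```r
library(keras)

F_in <- 16   # input node feature dimension (illustrative)
N <- 100     # number of nodes (illustrative)

# Node features and a sparse binary adjacency matrix as model inputs
x_in <- layer_input(shape = F_in, name = "node_features")
a_in <- layer_input(shape = N, sparse = TRUE, name = "adjacency")

# EdgeConv with one hidden MLP layer of 64 units and 32 output channels
out <- layer_edge_conv(
  list(x_in, a_in),
  channels = 32,
  mlp_hidden = list(64),
  mlp_activation = "relu",
  activation = "relu"
)

model <- keras_model(inputs = list(x_in, a_in), outputs = out)
```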