An edge-conditioned convolutional layer (ECC) as presented by Simonovsky & Komodakis (2017).
Mode: single, disjoint, batch.
Notes:
This layer expects dense inputs and self-loops when working in batch mode.
In single mode, if the adjacency matrix is dense it will be converted to a SparseTensor automatically (which is an expensive operation).
For each node \(i\), this layer computes:

\[ Z_i = X_i W_{\textrm{root}} + \sum\limits_{j \in \mathcal{N}(i)} X_j \, \textrm{MLP}(E_{ji}) + b \]

where \(\textrm{MLP}\) is a multi-layer perceptron that outputs an edge-specific weight as a function of edge attributes.
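To make the propagation rule concrete, here is a minimal numeric sketch in plain R for a tiny graph. `edge_mlp` is a hypothetical stand-in for the kernel-generating network (a single linear map here rather than a full MLP); all names and sizes are illustrative, not part of the layer's API.

set.seed(1)
N <- 3; F_in <- 2; channels <- 2; S <- 1           # nodes, features, outputs, edge attrs
X <- matrix(rnorm(N * F_in), nrow = N)             # node features
A <- matrix(c(0, 1, 0,
              1, 0, 1,
              0, 1, 0), nrow = N, byrow = TRUE)    # binary adjacency
E <- array(rnorm(N * N * S), dim = c(N, N, S))     # edge attributes

W_root <- matrix(rnorm(F_in * channels), nrow = F_in)
b <- rep(0, channels)

# Hypothetical kernel network: maps S edge attributes to an F_in x channels weight matrix
W_k <- matrix(rnorm(S * F_in * channels), nrow = S)
edge_mlp <- function(e) matrix(as.numeric(e) %*% W_k, nrow = F_in)

Z <- matrix(0, nrow = N, ncol = channels)
for (i in 1:N) {
  z <- X[i, ] %*% W_root                           # root term X_i W_root
  for (j in which(A[i, ] == 1)) {                  # neighbours j in N(i)
    z <- z + X[j, ] %*% edge_mlp(E[j, i, ])        # edge-conditioned contribution
  }
  Z[i, ] <- z + b
}
Z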
Input

Node features of shape ([batch], N, F);
Binary adjacency matrices of shape ([batch], N, N);
Edge features: in single mode, shape (num_edges, S); in batch mode, shape (batch, N, N, S). A sketch of the batch-mode shapes follows the Output section.
Output

Node features with the same shape as the input, but with the last dimension changed to channels.
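As a concrete illustration of the batch-mode shapes above, the following sketch builds random placeholder inputs (the sizes are arbitrary). Note the identity entries added to the adjacency matrices, since this layer expects self-loops in batch mode.

batch <- 8; N <- 10; F_in <- 4; S <- 3
x <- array(rnorm(batch * N * F_in), dim = c(batch, N, F_in))   # (batch, N, F)
a <- array(0, dim = c(batch, N, N))                            # (batch, N, N)
for (g in 1:batch) a[g, , ] <- diag(N)                         # self-loops
e <- array(rnorm(batch * N * N * S), dim = c(batch, N, N, S))  # (batch, N, N, S)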
Usage

layer_edge_conditioned_conv(
  object,
  channels,
  kernel_network = NULL,
  root = TRUE,
  activation = NULL,
  use_bias = TRUE,
  kernel_initializer = "glorot_uniform",
  bias_initializer = "zeros",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  activity_regularizer = NULL,
  kernel_constraint = NULL,
  bias_constraint = NULL,
  ...
)
Arguments

channels
  integer, number of output channels

kernel_network
  a list of integers representing the hidden neurons of the kernel-generating network
root
  bool, if TRUE, include the root-node term \(X_i W_{\textrm{root}}\) in the output (see the formula above)
activation
  activation function to use

use_bias
  bool, add a bias vector to the output

kernel_initializer
  initializer for the weights

bias_initializer
  initializer for the bias vector

kernel_regularizer
  regularization applied to the weights

bias_regularizer
  regularization applied to the bias vector

activity_regularizer
  regularization applied to the output

kernel_constraint
  constraint applied to the weights

bias_constraint
  constraint applied to the bias vector
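Examples

A hedged usage sketch, assuming the layer follows the usual keras functional-API calling convention in R with `object` given as a list of node-feature, adjacency, and edge-feature tensors; the shape and hyperparameter values below are illustrative.

library(keras)

N <- 10      # nodes per graph (illustrative)
F_in <- 4    # node feature size
S <- 3       # edge attribute size

# Batch mode: dense inputs, self-loops expected in the adjacency matrices
x_in <- layer_input(shape = c(N, F_in))   # node features  (batch, N, F)
a_in <- layer_input(shape = c(N, N))      # adjacency      (batch, N, N)
e_in <- layer_input(shape = c(N, N, S))   # edge features  (batch, N, N, S)

out <- layer_edge_conditioned_conv(
  list(x_in, a_in, e_in),
  channels = 32,
  kernel_network = list(16L, 16L),  # hidden neurons of the kernel-generating network
  activation = "relu"
)

model <- keras_model(inputs = list(x_in, a_in, e_in), outputs = out)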