A gated graph convolutional layer as presented by Li et al. in "Gated Graph Sequence Neural Networks".
Mode: single, disjoint.
This layer expects a sparse adjacency matrix.
This layer repeatedly applies a GRU cell $L$ times to the node attributes:

$$
\begin{aligned}
h^{(0)}_i &= X_i \,\|\, \mathbf{0} \\
m^{(l)}_i &= \sum_{j \in \mathcal{N}(i)} h^{(l-1)}_j W \\
h^{(l)}_i &= \textrm{GRU}\left( m^{(l)}_i, h^{(l-1)}_i \right) \\
Z_i &= h^{(L)}_i
\end{aligned}
$$

where $\textrm{GRU}$ is the GRU cell and $\|$ denotes concatenation, so the input features are zero-padded from $F$ up to channels dimensions before the first iteration.
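To make the propagation rule concrete, here is a minimal sketch in plain TensorFlow. It is an illustration of the equations above, not the layer's actual implementation; the function name, weight initialization, and shapes are assumptions.

```python
import tensorflow as tf

def gated_graph_conv(x, a, channels, n_layers):
    """Illustrative sketch of the propagation rule above (not the real layer).

    x: dense node features of shape (N, F), with F <= channels
    a: binary adjacency matrix as a tf.sparse.SparseTensor of shape (N, N)
    """
    f = x.shape[-1]
    # h^(0)_i = X_i || 0: zero-pad the features up to `channels` dimensions
    h = tf.pad(x, [[0, 0], [0, channels - f]])
    w = tf.Variable(tf.random.normal((channels, channels)))  # shared weights W
    gru = tf.keras.layers.GRUCell(channels)
    for _ in range(n_layers):
        # m^(l)_i = sum over j in N(i) of h^(l-1)_j W
        m = tf.sparse.sparse_dense_matmul(a, h @ w)
        # h^(l)_i = GRU(m^(l)_i, h^(l-1)_i)
        h, _ = gru(m, states=[h])
    return h  # Z = h^(L)
```

Note how the same GRU cell and weight matrix W are reused across all $L$ iterations, which is what distinguishes this layer from stacking $L$ independent convolutions.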
Input

- Node features of shape (N, F); note that F must be less than or equal to channels.
- Binary adjacency matrix of shape (N, N).

Output

- Node features with the same shape as the input, but with the last dimension changed to channels.
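For example, the following hedged sketch applies the layer to a 5-node cycle graph; it assumes the underlying Python implementation in Spektral, and all names and sizes are illustrative.

```python
import numpy as np
import tensorflow as tf
from spektral.layers import GatedGraphConv

N, F, channels = 5, 4, 8  # F must be <= channels
x = np.random.rand(N, F).astype("float32")  # node features, shape (N, F)

# Binary adjacency of a 5-node cycle as a sparse tensor, shape (N, N)
edges = sorted(
    [(i, (i + 1) % N) for i in range(N)] + [((i + 1) % N, i) for i in range(N)]
)
a = tf.sparse.SparseTensor(
    indices=edges, values=tf.ones(len(edges)), dense_shape=(N, N)
)

z = GatedGraphConv(channels=channels, n_layers=3)([x, a])
print(z.shape)  # (5, 8): same shape as the input, last dimension now `channels`
```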
Arguments

- channels: integer, number of output channels
- n_layers: integer, number of iterations with the GRU cell
- activation: activation function to use
- use_bias: bool, add a bias vector to the output
- kernel_initializer: initializer for the weights
- bias_initializer: initializer for the bias vector
- kernel_regularizer: regularization applied to the weights
- bias_regularizer: regularization applied to the bias vector
- activity_regularizer: regularization applied to the output
- kernel_constraint: constraint applied to the weights
- bias_constraint: constraint applied to the bias vector
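Putting the arguments together, here is a minimal model-building sketch. It assumes the Keras functional API and the underlying Spektral GatedGraphConv; the regularizer strength and layer sizes are placeholders, not recommended values.

```python
import tensorflow as tf
from spektral.layers import GatedGraphConv

F = 4  # number of input node features
x_in = tf.keras.Input(shape=(F,))                  # node features (N, F)
a_in = tf.keras.Input(shape=(None,), sparse=True)  # sparse adjacency (N, N)

z = GatedGraphConv(
    channels=8,      # output channels; F = 4 <= 8 as required
    n_layers=3,      # number of GRU iterations L
    activation="relu",
    use_bias=True,
    kernel_regularizer=tf.keras.regularizers.l2(5e-4),
)([x_in, a_in])

model = tf.keras.Model(inputs=[x_in, a_in], outputs=z)
model.summary()
```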