View source: R/layers_conv.R
A graph convolutional layer with ARMA\(_K\) filters, as presented by Bianchi et al. (2019).
Mode: single, disjoint, mixed, batch.
This layer computes:
\[ Z = \frac{1}{K} \sum_{k=1}^{K} \bar{X}_k^{(T)}, \]
where \(K\) is the order of the ARMA\(_K\) filter, and where:
\[ \bar{X}_k^{(t+1)} = \sigma\left( \tilde{L} \bar{X}^{(t)} W^{(t)} + X V^{(t)} \right) \]
is a recursive approximation of an ARMA\(_1\) filter, where \(\bar{X}^{(0)} = X\) and
\[ \tilde{L} = \frac{2}{\lambda_{max}} \cdot \left( I - D^{-1/2} A D^{-1/2} \right) - I \]
is the normalized Laplacian with a rescaled spectrum.
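As a concrete illustration of the recursion above, here is a minimal base-R sketch of a single ARMA\(_1\) update step on dense matrices. The function name, the placeholder weight matrices W and V, and the choice of ReLU for \(\sigma\) are assumptions for illustration, not the layer's internal implementation.

# One ARMA_1 recursion step from the formula above (illustrative only):
# X_bar_next = sigma(L_tilde %*% X_bar %*% W + X %*% V), with sigma = ReLU.
arma1_step <- function(L_tilde, X_bar, X, W, V) {
  pmax(L_tilde %*% X_bar %*% W + X %*% V, 0)
}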
Input

- Node features of shape ([batch], N, F);
- Normalized and rescaled Laplacian of shape ([batch], N, N); can be computed with spektral.utils.convolution.normalized_laplacian and spektral.utils.convolution.rescale_laplacian (a base-R alternative is sketched below).
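If the Python utilities are not at hand, the same preprocessing can be done directly in base R from a dense symmetric adjacency matrix, as in the sketch below. This is an illustrative implementation of the formula for \(\tilde{L}\) given above, and it assumes an undirected graph with no isolated nodes; the function name is hypothetical.

# Normalized Laplacian with rescaled spectrum, following the formula above.
# A: dense, symmetric adjacency matrix with no isolated nodes.
rescaled_laplacian <- function(A) {
  N <- nrow(A)
  d_inv_sqrt <- 1 / sqrt(rowSums(A))
  L <- diag(N) - diag(d_inv_sqrt, N) %*% A %*% diag(d_inv_sqrt, N)  # normalized Laplacian
  lambda_max <- max(eigen(L, symmetric = TRUE, only.values = TRUE)$values)
  (2 / lambda_max) * L - diag(N)                                    # rescale the spectrum
}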
Output

Node features with the same shape as the input, but with the last dimension changed to channels.
Usage

layer_arma_conv(
object,
channels,
order = 1,
iterations = 1,
share_weights = FALSE,
gcn_activation = "relu",
dropout_rate = 0,
activation = NULL,
use_bias = TRUE,
kernel_initializer = "glorot_uniform",
bias_initializer = "zeros",
kernel_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
kernel_constraint = NULL,
bias_constraint = NULL,
...
)
Arguments

channels: number of output channels
order: order of the full ARMA\(_K\) filter, i.e., the number of parallel stacks in the layer
iterations: number of iterations to compute each ARMA\(_1\) approximation
share_weights: share the weights in each ARMA\(_1\) stack
gcn_activation: activation function to use to compute each ARMA\(_1\) stack
dropout_rate: dropout rate for skip connection
activation: activation function to use
use_bias: bool, add a bias vector to the output
kernel_initializer: initializer for the weights
bias_initializer: initializer for the bias vector
kernel_regularizer: regularization applied to the weights
bias_regularizer: regularization applied to the bias vector
activity_regularizer: regularization applied to the output
kernel_constraint: constraint applied to the weights
bias_constraint: constraint applied to the bias vector
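A hedged end-to-end sketch of wiring the layer into a Keras functional model follows. The two-input pattern (node features plus preprocessed Laplacian, passed to the layer as a list) and the shapes used are assumptions for illustration, not taken verbatim from the package documentation.

library(keras)

n_nodes <- 100  # number of nodes (illustrative)
n_feat  <- 16   # number of input node features (illustrative)

x_in <- layer_input(shape = c(n_nodes, n_feat))   # node features ([batch], N, F)
l_in <- layer_input(shape = c(n_nodes, n_nodes))  # rescaled Laplacian ([batch], N, N)

# Assumed calling convention: both inputs are passed to the layer as a list.
out <- layer_arma_conv(
  list(x_in, l_in),
  channels = 32,
  order = 2,
  iterations = 2,
  share_weights = TRUE,
  gcn_activation = "relu",
  dropout_rate = 0.5,
  activation = "relu"
)

model <- keras_model(inputs = list(x_in, l_in), outputs = out)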