View source: R/layers_conv.R
A Chebyshev convolutional layer as presented by Defferrard et al. (2016).
Mode: single, disjoint, mixed, batch.
This layer computes:

Z = \sum_{k=0}^{K-1} T^{(k)} W^{(k)} + b^{(k)},

where T^{(0)}, ..., T^{(K-1)} are Chebyshev polynomials of \tilde{L} defined by the recursion

T^{(0)} = X, \quad T^{(1)} = \tilde{L} X, \quad T^{(k \ge 2)} = 2 \tilde{L} T^{(k-1)} - T^{(k-2)},

where

\tilde{L} = \frac{2}{\lambda_{\max}} (I - D^{-1/2} A D^{-1/2}) - I

is the normalized Laplacian with a rescaled spectrum.
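The rescaled Laplacian and the Chebyshev recursion above can be sketched in plain NumPy. This is an illustrative sketch, not the package's implementation; the helper names are hypothetical, and `lambda_max` defaults to 2, a common approximation for the largest eigenvalue of the normalized Laplacian.

```python
import numpy as np

def rescaled_laplacian(A, lambda_max=2.0):
    # L~ = (2 / lambda_max) * (I - D^{-1/2} A D^{-1/2}) - I
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return (2.0 / lambda_max) * L - np.eye(A.shape[0])

def chebyshev_polynomials(L_tilde, X, K):
    # T^(0) = X; T^(1) = L~ X; T^(k) = 2 L~ T^(k-1) - T^(k-2)
    T = [X]
    if K > 1:
        T.append(L_tilde @ X)
    for _ in range(2, K):
        T.append(2 * L_tilde @ T[-1] - T[-2])
    return T
```

The recursion never forms powers of \tilde{L} explicitly; each term costs one sparse-friendly matrix product.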
Input

- Node features of shape ([batch], N, F);
- A list of K Chebyshev polynomials of shape [([batch], N, N), ..., ([batch], N, N)]; can be computed with spektral.utils.convolution.chebyshev_filter.

Output

- Node features with the same shape as the input, but with the last dimension changed to channels.
Arguments

channels: number of output channels.
K: order of the Chebyshev polynomials.
activation: activation function to use.
use_bias: bool, add a bias vector to the output.
kernel_initializer: initializer for the weights.
bias_initializer: initializer for the bias vector.
kernel_regularizer: regularization applied to the weights.
bias_regularizer: regularization applied to the bias vector.
activity_regularizer: regularization applied to the output.
kernel_constraint: constraint applied to the weights.
bias_constraint: constraint applied to the bias vector.
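Given the K polynomial matrices from the Input section, the layer's dense transform is a weighted sum over orders. The sketch below uses random placeholder inputs (all names are illustrative) to show how channels, K, and use_bias interact with the shapes; in the layer equation, T^{(k)} denotes the k-th polynomial of \tilde{L} applied to the features X.

```python
import numpy as np

rng = np.random.default_rng(0)
N, F, K, channels = 5, 3, 3, 4

# Placeholder inputs standing in for the real ones:
X = rng.standard_normal((N, F))                      # node features, shape (N, F)
T = [rng.standard_normal((N, N)) for _ in range(K)]  # K polynomial matrices, each (N, N)
W = [rng.standard_normal((F, channels)) for _ in range(K)]  # one weight matrix per order
b = np.zeros(channels)                               # bias, added when use_bias=True

# Z = sum_k (T_k X) W^(k) + b  ->  output shape (N, channels)
Z = sum(Tk @ X @ Wk for Tk, Wk in zip(T, W)) + b
```

The output keeps the node dimension N and replaces the feature dimension F with channels, as stated in the Output section.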