Applies Graph Diffusion Convolution as described by Li et al. (2016).
Mode: single, disjoint, mixed, batch.
This layer expects a dense adjacency matrix.
Given a number of diffusion steps \(K\) and a row-normalized adjacency matrix \(\hat A\), this layer calculates the q-th channel as:

\[ \mathbf{H}_{:,q} = \sigma\left( \sum_{f=1}^{F} \left( \sum_{k=0}^{K-1} \theta_k \hat A^k \right) X_{:,f} \right) \]
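The formula above can be sketched in NumPy for a single output channel. This is an illustrative re-implementation of the equation, not the layer's actual code; the function name `diffusion_conv` and the weight layout `theta` of shape (F, K) are our own assumptions.

```python
import numpy as np

def diffusion_conv(x, a_hat, theta):
    """One output channel of graph diffusion convolution (illustrative).

    x:     (N, F) node features
    a_hat: (N, N) row-normalized adjacency matrix
    theta: (F, K) diffusion weights for this channel (assumed layout)
    returns: (N,) activations for this channel
    """
    n, f_dim = x.shape
    k_steps = theta.shape[1]
    out = np.zeros(n)
    for f in range(f_dim):
        power = np.eye(n)  # A_hat^0
        for k in range(k_steps):
            # inner sum: theta_k * A_hat^k applied to feature column f
            out += theta[f, k] * (power @ x[:, f])
            power = power @ a_hat
    return np.tanh(out)  # sigma = tanh, the layer's default

# Tiny check: with A_hat = I and K = 1, each node just sums its
# own (weighted) features before the activation.
x = np.ones((3, 2))
a_hat = np.eye(3)
theta = np.ones((2, 1))  # F = 2, K = 1
h_q = diffusion_conv(x, a_hat, theta)
```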
Input

- Node features of shape ([batch], N, F);
- Normalized adjacency or attention coefficient matrix \(\hat A\) of shape ([batch], N, N); use DiffusionConvolution.preprocess to normalize.

Output

- Node features with the same shape as the input, but with the last dimension changed to channels.
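The layer expects \(\hat A\) to be row-normalized, i.e. \(\hat A = D^{-1} A\) where \(D\) is the degree matrix; this is what the preprocess helper produces. A minimal NumPy sketch of the same normalization (the function name `row_normalize` is ours, not the package's):

```python
import numpy as np

def row_normalize(a):
    """Row-normalize an adjacency matrix: A_hat = D^{-1} A."""
    deg = a.sum(axis=-1, keepdims=True)
    deg = np.where(deg == 0, 1.0, deg)  # guard against isolated nodes
    return a / deg

a = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
a_hat = row_normalize(a)
# every row of a_hat now sums to 1
```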
layer_diffusion_conv(
  object,
  channels,
  num_diffusion_steps = 6,
  kernel_initializer = "glorot_uniform",
  kernel_regularizer = NULL,
  kernel_constraint = NULL,
  activation = "tanh",
  ...
)
Arguments

channels: number of output channels
num_diffusion_steps: how many diffusion steps to consider; \(K\) in the paper
kernel_initializer: initializer for the weights
kernel_regularizer: regularization applied to the weights
kernel_constraint: constraint applied to the weights
activation: activation function \(\sigma\) (\(\tanh\) by default)