A graph convolutional layer implementing the APPNP operator, as presented by Klicpera et al. (2019).
This layer computes:
$$Z^{(0)} = \textrm{MLP}(X); \quad Z^{(K)} = (1 - \alpha) \hat{D}^{-1/2} \hat{A} \hat{D}^{-1/2} Z^{(K-1)} + \alpha Z^{(0)},$$
where $\alpha$ is the teleport probability and $\textrm{MLP}$ is a multi-layer perceptron.
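To make the recurrence concrete, here is a minimal sketch of the propagation rule in plain R, assuming a dense adjacency matrix A_hat with self-loops already added and an MLP output Z0. The helper name and the dense implementation are illustrative only; the actual layer runs this computation inside the TensorFlow graph.

# Illustrative dense implementation of the APPNP recurrence (not the layer's own code)
appnp_propagate <- function(A_hat, Z0, alpha = 0.2, K = 10) {
  # Symmetric normalization: D_hat^{-1/2} A_hat D_hat^{-1/2}
  d <- rowSums(A_hat)
  A_norm <- diag(1 / sqrt(d)) %*% A_hat %*% diag(1 / sqrt(d))
  Z <- Z0
  for (k in seq_len(K)) {
    # Z^{(k)} = (1 - alpha) * A_norm %*% Z^{(k-1)} + alpha * Z^{(0)}
    Z <- (1 - alpha) * (A_norm %*% Z) + alpha * Z0
  }
  Z
}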
Mode: single, disjoint, mixed, batch.
Input

- Node features of shape ([batch], N, F);
- Modified Laplacian of shape ([batch], N, N); can be computed with spektral.utils.convolution.localpooling_filter, as sketched below.

Output

- Node features with the same shape as the input, but with the last dimension changed to channels.
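The modified Laplacian can be precomputed from the adjacency matrix before training. A brief sketch of doing so from R through reticulate, assuming the Python spektral package is importable in the active environment; the 3-node adjacency matrix is purely illustrative.

library(reticulate)

conv_utils <- import("spektral.utils.convolution")

# Toy 3-node adjacency matrix
A <- matrix(c(0, 1, 1,
              1, 0, 0,
              1, 0, 0), nrow = 3, byrow = TRUE)

# Computes D_hat^{-1/2} A_hat D_hat^{-1/2}, adding self-loops internally
A_hat <- conv_utils$localpooling_filter(A)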
Usage

layer_appnp(
  object,
  channels,
  alpha = 0.2,
  propagations = 1,
  mlp_hidden = NULL,
  mlp_activation = "relu",
  dropout_rate = 0,
  activation = NULL,
  use_bias = TRUE,
  kernel_initializer = "glorot_uniform",
  bias_initializer = "zeros",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  activity_regularizer = NULL,
  kernel_constraint = NULL,
  bias_constraint = NULL,
  ...
)
Arguments

channels: number of output channels.
alpha: teleport probability during propagation.
propagations: number of propagation steps.
mlp_hidden: list of integers, number of hidden units for each hidden layer in the MLP (if NULL, the MLP has only the output layer).
mlp_activation: activation for the MLP layers.
dropout_rate: dropout rate for the Laplacian and MLP layers.
activation: activation function to use.
use_bias: bool, whether to add a bias vector to the output.
kernel_initializer: initializer for the weights.
bias_initializer: initializer for the bias vector.
kernel_regularizer: regularization applied to the weights.
bias_regularizer: regularization applied to the bias vector.
activity_regularizer: regularization applied to the output.
kernel_constraint: constraint applied to the weights.
bias_constraint: constraint applied to the bias vector.
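A hedged end-to-end sketch in batch mode, assuming the keras R package is attached and that the layer, like other wrapped layers, accepts a list of node-feature and Laplacian tensors as its input; the node counts, channel sizes, and mlp_hidden values are illustrative.

library(keras)

n_nodes <- 100  # nodes per graph
n_feat  <- 32   # input node features

x_in <- layer_input(shape = c(n_nodes, n_feat))   # node features
a_in <- layer_input(shape = c(n_nodes, n_nodes))  # modified Laplacian

out <- layer_appnp(
  list(x_in, a_in),
  channels = 16,
  alpha = 0.1,
  propagations = 10,
  mlp_hidden = list(64L),
  activation = "softmax"
)

model <- keras_model(inputs = list(x_in, a_in), outputs = out)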