A minCUT pooling layer as presented by Bianchi et al. (2019).
Mode: batch.
This layer computes a soft clustering \(\boldsymbol{S}\) of the input graphs using an MLP, and reduces graphs as follows:
\[\boldsymbol{S} = \textrm{MLP}(\boldsymbol{X}); \quad \boldsymbol{A}' = \boldsymbol{S}^\top \boldsymbol{A} \boldsymbol{S}; \quad \boldsymbol{X}' = \boldsymbol{S}^\top \boldsymbol{X};\]
where MLP is a multi-layer perceptron with softmax output. Two auxiliary loss terms are also added to the model: the minCUT loss \[- \frac{\mathrm{Tr}(\boldsymbol{S}^\top \boldsymbol{A} \boldsymbol{S})}{\mathrm{Tr}(\boldsymbol{S}^\top \boldsymbol{D} \boldsymbol{S})}\] and the orthogonality loss \[\left\| \frac{\boldsymbol{S}^\top \boldsymbol{S}}{\| \boldsymbol{S}^\top \boldsymbol{S} \|_F} - \frac{\boldsymbol{I}_K}{\sqrt{K}} \right\|_F,\] where \(\boldsymbol{D}\) is the degree matrix.
The layer can be used without a supervised loss to compute a node clustering, simply by minimizing the two auxiliary losses.
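To make the equations concrete, here is a minimal base-R sketch of the forward pass and the two auxiliary losses. A random linear map stands in for the trained MLP; all names and toy dimensions are illustrative, not part of the package API:

# Toy dimensions: N nodes, F features, K clusters.
set.seed(1)
N <- 6; F_dim <- 4; K <- 2

X <- matrix(rnorm(N * F_dim), nrow = N)        # node features (N x F)
A <- matrix(rbinom(N * N, 1, 0.4), nrow = N)   # random adjacency
A <- pmax(A, t(A)); diag(A) <- 0               # symmetric, no self-loops
D <- diag(rowSums(A))                          # degree matrix

# Stand-in for the MLP: one random linear map followed by a row-wise
# softmax, so each row of S is a distribution over the K clusters.
W <- matrix(rnorm(F_dim * K), nrow = F_dim)
logits <- X %*% W
S <- exp(logits) / rowSums(exp(logits))        # soft clustering (N x K)

# Pooling: A' = S^T A S, X' = S^T X
A_pooled <- t(S) %*% A %*% S                   # (K x K)
X_pooled <- t(S) %*% X                         # (K x F)

# Auxiliary losses as defined above (sum(diag(.)) is the trace)
cut_loss   <- -sum(diag(t(S) %*% A %*% S)) / sum(diag(t(S) %*% D %*% S))
StS        <- t(S) %*% S
ortho_loss <- norm(StS / norm(StS, "F") - diag(K) / sqrt(K), "F")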
Input

- Node features of shape ([batch], N, F);
- Binary adjacency matrix of shape ([batch], N, N).

Output

- Reduced node features of shape ([batch], K, F);
- Reduced adjacency matrix of shape ([batch], K, K);
- If return_mask=TRUE, the soft clustering matrix of shape ([batch], N, K).
Usage

layer_min_cut_pool(
object,
k,
mlp_hidden = NULL,
mlp_activation = "relu",
return_mask = FALSE,
activation = NULL,
use_bias = TRUE,
kernel_initializer = "glorot_uniform",
bias_initializer = "zeros",
kernel_regularizer = NULL,
bias_regularizer = NULL,
kernel_constraint = NULL,
bias_constraint = NULL,
...
)
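As a usage sketch, the layer could be wired into a Keras-style functional model as below. This is hypothetical: the list-input calling convention and the structure of the returned objects are assumptions based on the Input/Output sections above, not verified against the package.

library(keras)

N <- 30      # nodes per graph
F_dim <- 8   # features per node
K <- 5       # clusters after pooling

x_in <- layer_input(shape = c(N, F_dim))   # node features ([batch], N, F)
a_in <- layer_input(shape = c(N, N))       # adjacency     ([batch], N, N)

# Assumed convention: the layer takes a list of (features, adjacency).
pooled <- layer_min_cut_pool(
  list(x_in, a_in),
  k = K,
  mlp_hidden = c(16),
  return_mask = TRUE
)
# Assumed outputs, per the Output section:
# pooled[[1]]: reduced features  ([batch], K, F)
# pooled[[2]]: reduced adjacency ([batch], K, K)
# pooled[[3]]: soft clustering   ([batch], N, K)

Since the minCUT and orthogonality losses are added to the model by the layer itself, training such a model without any supervised loss minimizes only the auxiliary terms, which yields the unsupervised node clustering described above.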