A DiffPool layer as presented by Ying et al. (2018).
Mode: batch.
This layer computes a soft clustering \(\boldsymbol{S}\) of the input graphs using a GNN, and reduces graphs as follows:
$$\boldsymbol{S} = \textrm{GNN}(\boldsymbol{A}, \boldsymbol{X}); \quad \boldsymbol{A}' = \boldsymbol{S}^\top \boldsymbol{A} \boldsymbol{S}; \quad \boldsymbol{X}' = \boldsymbol{S}^\top \boldsymbol{X};$$
where GNN consists of one GraphConv layer with softmax activation. Two auxiliary loss terms are also added to the model: the link prediction loss
$$\big\| \boldsymbol{A} - \boldsymbol{S}\boldsymbol{S}^\top \big\|_F$$
and the entropy loss
$$- \frac{1}{N} \sum\limits_{i=1}^{N} \boldsymbol{S} \log(\boldsymbol{S}).$$
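To make the reduction concrete, here is a minimal NumPy sketch of the equations above, with a random row-wise softmax matrix standing in for the learned GNN output (all names and sizes are illustrative, not part of the layer's implementation):

```python
import numpy as np

N, F, K = 6, 4, 2                       # nodes, features, clusters (illustrative)
A = (np.random.rand(N, N) < 0.3).astype(float)
A = np.maximum(A, A.T)                  # symmetric binary adjacency
X = np.random.rand(N, F)                # node features

# Stand-in for S = GNN(A, X): row-wise softmax over the K clusters
logits = np.random.rand(N, K)
S = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Graph reduction
A_red = S.T @ A @ S                     # (K, K) reduced adjacency
X_red = S.T @ X                         # (K, F) reduced features

# Auxiliary losses
link_loss = np.linalg.norm(A - S @ S.T, ord="fro")
entropy_loss = -np.mean(np.sum(S * np.log(S + 1e-9), axis=-1))
```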
The layer also applies a 1-layer GCN to the input features, and returns the updated graph signal (the number of output channels is controlled by the channels parameter).
The layer can be used without a supervised loss to compute node clustering, simply by minimizing the two auxiliary losses, as in the sketch below.
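Because the link-prediction and entropy terms are registered as layer losses, a model containing this layer can be trained with no supervised target at all. A hedged sketch, assuming Spektral's DiffPool with a Keras-style add_loss mechanism (the import path and constructor arguments k and return_mask are taken from this page and may differ across versions):

```python
import numpy as np
import tensorflow as tf
from spektral.layers import DiffPool  # assumed import path

N, F, K = 20, 16, 5
X_in = tf.keras.Input(shape=(N, F))   # node features
A_in = tf.keras.Input(shape=(N, N))   # binary adjacency
_, _, S = DiffPool(k=K, return_mask=True)([X_in, A_in])
model = tf.keras.Model([X_in, A_in], S)

# No supervised loss: only the layer's auxiliary losses are minimized.
model.compile(optimizer="adam")
X = np.random.rand(64, N, F).astype("float32")
A = (np.random.rand(64, N, N) < 0.2).astype("float32")
model.fit([X, A], epochs=5, batch_size=8)

# Hard cluster assignment per node from the soft clustering matrix S
clusters = np.argmax(model.predict([X, A]), axis=-1)
```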
Input

- Node features of shape ([batch], N, F);
- Binary adjacency matrix of shape ([batch], N, N).

Output

- Reduced node features of shape ([batch], K, channels);
- Reduced adjacency matrix of shape ([batch], K, K);
- If return_mask=True, the soft clustering matrix of shape ([batch], N, K).
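A short usage sketch checking these shapes in batch mode, again assuming Spektral's DiffPool with the constructor arguments named on this page (k, channels, return_mask); the exact signature is an assumption:

```python
import numpy as np
from spektral.layers import DiffPool  # assumed import path

batch, N, F, K, channels = 8, 20, 16, 5, 32
X = np.random.rand(batch, N, F).astype("float32")
A = (np.random.rand(batch, N, N) < 0.2).astype("float32")

pool = DiffPool(k=K, channels=channels, return_mask=True)
X_pool, A_pool, S = pool([X, A])
print(X_pool.shape)  # (8, 5, 32)  -> ([batch], K, channels)
print(A_pool.shape)  # (8, 5, 5)   -> ([batch], K, K)
print(S.shape)       # (8, 20, 5)  -> ([batch], N, K)
```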