torch_geometric.nn.dense.dense_diff_pool
- dense_diff_pool(x: Tensor, adj: Tensor, s: Tensor, mask: Optional[Tensor] = None, normalize: bool = True) → Tuple[Tensor, Tensor, Tensor, Tensor][source]
The differentiable pooling operator from the “Hierarchical Graph Representation Learning with Differentiable Pooling” paper.
\[ \begin{align}\begin{aligned}\mathbf{X}^{\prime} &= {\mathrm{softmax}(\mathbf{S})}^{\top} \cdot \mathbf{X}\\\mathbf{A}^{\prime} &= {\mathrm{softmax}(\mathbf{S})}^{\top} \cdot \mathbf{A} \cdot \mathrm{softmax}(\mathbf{S})\end{aligned}\end{align} \]based on dense learned assignments \(\mathbf{S} \in \mathbb{R}^{B \times N \times C}\). Returns the pooled node feature matrix, the coarsened adjacency matrix and two auxiliary objectives: (1) the link prediction loss
\[\mathcal{L}_{LP} = {\| \mathbf{A} - \mathrm{softmax}(\mathbf{S}) {\mathrm{softmax}(\mathbf{S})}^{\top} \|}_F,\]and (2) the entropy regularization
\[\mathcal{L}_E = \frac{1}{N} \sum_{n=1}^N H(\mathbf{S}_n).\]- Parameters:
x (torch.Tensor) – Node feature tensor \(\mathbf{X} \in \mathbb{R}^{B \times N \times F}\), with batch-size \(B\), (maximum) number of nodes \(N\) for each graph, and feature dimension \(F\).
adj (torch.Tensor) – Adjacency tensor \(\mathbf{A} \in \mathbb{R}^{B \times N \times N}\).
s (torch.Tensor) – Assignment tensor \(\mathbf{S} \in \mathbb{R}^{B \times N \times C}\) with number of clusters \(C\). The softmax does not have to be applied beforehand, since it is executed within this method.
mask (torch.Tensor, optional) – Mask matrix \(\mathbf{M} \in {\{ 0, 1 \}}^{B \times N}\) indicating the valid nodes for each graph. (default: None)
normalize (bool, optional) – If set to False, the link prediction loss is not divided by adj.numel(). (default: True)
- Return type:
Tuple[Tensor, Tensor, Tensor, Tensor]