torch_geometric.nn.dense.dense_mincut_pool

dense_mincut_pool(x: Tensor, adj: Tensor, s: Tensor, mask: Optional[Tensor] = None, temp: float = 1.0) → Tuple[Tensor, Tensor, Tensor, Tensor] [source]

The MinCut pooling operator from the "Spectral Clustering with Graph Neural Networks for Graph Pooling" paper.

\[\begin{aligned}
\mathbf{X}^{\prime} &= {\mathrm{softmax}(\mathbf{S})}^{\top} \cdot \mathbf{X}\\
\mathbf{A}^{\prime} &= {\mathrm{softmax}(\mathbf{S})}^{\top} \cdot \mathbf{A} \cdot \mathrm{softmax}(\mathbf{S})
\end{aligned}\]
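As an illustration only (not the library implementation), the two products above can be written directly with batched matrix multiplications, assuming dense tensors with the shapes listed under Parameters:

    import torch

    # Toy dense tensors: batch B, N nodes, F features, C clusters
    B, N, F, C = 2, 6, 16, 3
    x = torch.randn(B, N, F)      # X
    adj = torch.rand(B, N, N)     # A
    s = torch.randn(B, N, C)      # raw assignment logits S

    s = torch.softmax(s, dim=-1)              # softmax(S) over clusters
    x_pool = s.transpose(1, 2) @ x            # X' = softmax(S)^T X            -> [B, C, F]
    adj_pool = s.transpose(1, 2) @ adj @ s    # A' = softmax(S)^T A softmax(S) -> [B, C, C]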

based on dense learned assignments \(\mathbf{S} \in \mathbb{R}^{B \times N \times C}\). Returns the pooled node feature matrix, the coarsened and symmetrically normalized adjacency matrix, and two auxiliary objectives: (1) the MinCut loss

\[\mathcal{L}_c = - \frac{\mathrm{Tr}(\mathbf{S}^{\top} \mathbf{A} \mathbf{S})} {\mathrm{Tr}(\mathbf{S}^{\top} \mathbf{D} \mathbf{S})}\]

where \(\mathbf{D}\) is the degree matrix, and (2) the orthogonality loss

\[\mathcal{L}_o = {\left\| \frac{\mathbf{S}^{\top} \mathbf{S}} {{\|\mathbf{S}^{\top} \mathbf{S}\|}_F} -\frac{\mathbf{I}_C}{\sqrt{C}} \right\|}_F.\]
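A minimal sketch of the two objectives, assuming s has already been softmax-normalized and adj is a dense \(B \times N \times N\) adjacency tensor (the variable names and shapes are illustrative, not taken from the library source):

    import torch

    B, N, C = 2, 6, 3
    adj = torch.rand(B, N, N)
    s = torch.softmax(torch.randn(B, N, C), dim=-1)

    d = torch.diag_embed(adj.sum(dim=-1))                       # degree matrix D
    num = torch.einsum('bii->b', s.transpose(1, 2) @ adj @ s)   # Tr(S^T A S)
    den = torch.einsum('bii->b', s.transpose(1, 2) @ d @ s)     # Tr(S^T D S)
    mincut_loss = -(num / den).mean()                           # L_c, averaged over the batch

    ss = s.transpose(1, 2) @ s                                  # S^T S -> [B, C, C]
    i_c = torch.eye(C, device=ss.device)
    ortho_loss = torch.norm(
        ss / torch.norm(ss, dim=(-1, -2), keepdim=True) - i_c / C ** 0.5,
        dim=(-1, -2),
    ).mean()                                                    # L_o, averaged over the batch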
Parameters:
  • x (torch.Tensor) – Node feature tensor \(\mathbf{X} \in \mathbb{R}^{B \times N \times F}\), with batch-size \(B\), (maximum) number of nodes \(N\) for each graph, and feature dimension \(F\).

  • adj (torch.Tensor) – Adjacency tensor \(\mathbf{A} \in \mathbb{R}^{B \times N \times N}\).

  • s (torch.Tensor) – Assignment tensor \(\mathbf{S} \in \mathbb{R}^{B \times N \times C}\) with number of clusters \(C\). The softmax does not have to be applied beforehand, since it is executed within this method.

  • mask (torch.Tensor, optional) – Mask matrix \(\mathbf{M} \in {\{ 0, 1 \}}^{B \times N}\) indicating the valid nodes for each graph. (default: None)

  • temp (float, optional) – Temperature parameter for the softmax function. (default: 1.0)

Return type:

(torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor)
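
A minimal usage sketch (the shapes, the linear layer producing the assignments, and the all-ones mask are assumptions for illustration, not part of this page):

    import torch
    from torch_geometric.nn import dense_mincut_pool

    B, N, F, C = 2, 20, 16, 5
    x = torch.randn(B, N, F)                     # node features
    adj = torch.rand(B, N, N)                    # dense adjacency
    s = torch.nn.Linear(F, C)(x)                 # raw assignment logits (softmax is applied inside)
    mask = torch.ones(B, N, dtype=torch.bool)    # all nodes are valid here

    out_x, out_adj, mincut_loss, ortho_loss = dense_mincut_pool(x, adj, s, mask)
    # out_x: [B, C, F], out_adj: [B, C, C]; the two losses are scalars
    # that are typically added to the task loss during training.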