torch_geometric.nn.conv.APPNP
- class APPNP(K: int, alpha: float, dropout: float = 0.0, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, **kwargs)[source]
Bases: MessagePassing

The approximate personalized propagation of neural predictions layer from the "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" paper.
\[
\begin{aligned}
\mathbf{X}^{(0)} &= \mathbf{X}, \\
\mathbf{X}^{(k)} &= (1 - \alpha) \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \mathbf{X}^{(k-1)} + \alpha \mathbf{X}^{(0)}, \\
\mathbf{X}^{\prime} &= \mathbf{X}^{(K)},
\end{aligned}
\]

where \(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) denotes the adjacency matrix with inserted self-loops and \(\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}\) its diagonal degree matrix. The adjacency matrix can include values other than 1, representing edge weights via the optional edge_weight tensor.
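To make the update rule concrete, here is a minimal dense-tensor sketch of the iteration above (an illustration only, not the library's sparse message-passing implementation; the function name and hyperparameter values are made up for the example):

    import torch

    def appnp_propagate(x, adj, K=10, alpha=0.1):
        # Hypothetical helper illustrating the APPNP update rule on a dense adjacency matrix.
        # A_hat = A + I: insert self-loops.
        adj_hat = adj + torch.eye(adj.size(0))
        # Symmetric normalization: D_hat^{-1/2} A_hat D_hat^{-1/2}.
        deg_inv_sqrt = adj_hat.sum(dim=1).pow(-0.5)
        norm_adj = deg_inv_sqrt.view(-1, 1) * adj_hat * deg_inv_sqrt.view(1, -1)
        x0 = x  # X^(0): the initial predictions, retained for teleportation.
        for _ in range(K):
            # X^(k) = (1 - alpha) * norm_adj @ X^(k-1) + alpha * X^(0)
            x = (1 - alpha) * norm_adj @ x + alpha * x0
        return x  # X' = X^(K)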
- Parameters:
  - K (int) – Number of iterations \(K\).
  - alpha (float) – Teleport probability \(\alpha\).
  - dropout (float, optional) – Dropout probability of edges during training. (default: 0.0)
  - cached (bool, optional) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
  - add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)
  - normalize (bool, optional) – Whether to add self-loops and apply symmetric normalization. (default: True)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
- Shapes:
input: node features \((|\mathcal{V}|, F)\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional)
output: node features \((|\mathcal{V}|, F)\)
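A minimal usage sketch (the graph, feature sizes, and hyperparameter values are made-up placeholders):

    import torch
    from torch_geometric.nn import APPNP

    x = torch.randn(4, 16)                    # node features (|V| = 4, F = 16)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 0, 3, 2]]) # edge indices (2, |E|)

    prop = APPNP(K=10, alpha=0.1)
    out = prop(x, edge_index)                 # node features (4, 16)

Note that in the original paper this propagation is applied after an MLP has produced per-node predictions, so the layer itself introduces no trainable weights.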