torch_geometric.nn.conv.SSGConv
- class SSGConv(in_channels: int, out_channels: int, alpha: float, K: int = 1, cached: bool = False, add_self_loops: bool = True, bias: bool = True, **kwargs)[source]
Bases: MessagePassing

The simple spectral graph convolutional operator from the "Simple Spectral Graph Convolution" paper.

\[\mathbf{X}^{\prime} = \frac{1}{K} \sum_{k=1}^K\left((1-\alpha) {\left(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \right)}^k \mathbf{X}+\alpha \mathbf{X}\right) \mathbf{\Theta},\]

where \(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) denotes the adjacency matrix with inserted self-loops and \(\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}\) its diagonal degree matrix. The adjacency matrix can include values other than 1 via the optional edge_weight tensor. SSGConv is an improved operator of SGConv that introduces the alpha parameter to alleviate over-smoothing (a dense sketch of this propagation follows the parameter list below).

- Parameters:
  - in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
  - out_channels (int) – Size of each output sample.
  - alpha (float) – Teleport probability \(\alpha \in [0, 1]\).
  - K (int, optional) – Number of hops \(K\). (default: 1)
  - cached (bool, optional) – If set to True, the layer will cache the computation of \(\frac{1}{K} \sum_{k=1}^K\left((1-\alpha) {\left(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \right)}^k \mathbf{X}+ \alpha \mathbf{X}\right)\) on first execution and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
  - add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)
  - bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
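For reference, here is a minimal dense-tensor sketch of the propagation rule above, assuming a small unweighted graph with self-loops added; ssg_propagate_dense is a hypothetical helper for illustration only and is not part of torch_geometric:

```python
import torch

def ssg_propagate_dense(x, adj, alpha, K):
    # Illustrative only: dense computation of
    #   1/K * sum_{k=1..K} ((1 - alpha) * S^k @ X + alpha * X)
    # with S = D_hat^{-1/2} A_hat D_hat^{-1/2} and A_hat = A + I.
    # The learnable weight Theta (and bias) of SSGConv would be applied afterwards.
    a_hat = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    s = deg_inv_sqrt.view(-1, 1) * a_hat * deg_inv_sqrt.view(1, -1)

    out = torch.zeros_like(x)
    h = x
    for _ in range(K):
        h = s @ h                              # iteratively accumulates S^k @ X
        out = out + (1 - alpha) * h + alpha * x
    return out / K
```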
- Shapes:
input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\)
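A minimal usage sketch with a hypothetical 4-node toy graph:

```python
import torch
from torch_geometric.nn import SSGConv

x = torch.randn(4, 16)                            # node features: [|V|, F_in]
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])   # edge indices: [2, |E|]

conv = SSGConv(in_channels=16, out_channels=32, alpha=0.1, K=4)
out = conv(x, edge_index)                         # node features: [|V|, F_out] -> [4, 32]
```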