torch_geometric.nn.conv.GCNConv
- class GCNConv(in_channels: int, out_channels: int, improved: bool = False, cached: bool = False, add_self_loops: Optional[bool] = None, normalize: bool = True, bias: bool = True, **kwargs)[source]
Bases:
MessagePassing

The graph convolutional operator from the "Semi-Supervised Classification with Graph Convolutional Networks" paper:

\[\mathbf{X}^{\prime} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \mathbf{X} \mathbf{\Theta},\]

where \(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) denotes the adjacency matrix with inserted self-loops and \(\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}\) its diagonal degree matrix. The adjacency matrix can include values other than 1, representing edge weights via the optional edge_weight tensor. Its node-wise formulation is given by:

\[\mathbf{x}^{\prime}_i = \mathbf{\Theta}^{\top} \sum_{j \in \mathcal{N}(i) \cup \{ i \}} \frac{e_{j,i}}{\sqrt{\hat{d}_j \hat{d}_i}} \mathbf{x}_j,\]

with \(\hat{d}_i = 1 + \sum_{j \in \mathcal{N}(i)} e_{j,i}\), where \(e_{j,i}\) denotes the edge weight from source node j to target node i (default: 1.0).

- Parameters:
in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
out_channels (int) – Size of each output sample.
improved (bool, optional) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)
cached (bool, optional) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. By default, self-loops will be added if normalize is set to True, and not added otherwise. (default: None)
normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on-the-fly. (default: True)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs (optional) – Additional arguments of
torch_geometric.nn.conv.MessagePassing.
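The matrix and node-wise formulations above are equivalent, which can be checked with a small NumPy sketch. This is an illustration of the math only, not the library implementation; the graph, weights, and \(\mathbf{\Theta}\) below are arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, F_in, F_out = 3, 4, 2
X = rng.normal(size=(N, F_in))          # node features
Theta = rng.normal(size=(F_in, F_out))  # stand-in for the learned weight matrix

# Weighted directed edges (source j, target i) with weights e_{j,i} (toy data).
edges = [(0, 1), (1, 2), (2, 0), (0, 2)]
w = [0.5, 2.0, 1.0, 0.3]

# \hat{A}[i, j] = e_{j,i}, plus self-loops with weight 1: \hat{A} = A + I.
A_hat = np.eye(N)
for (j, i), e in zip(edges, w):
    A_hat[i, j] = e

d_hat = A_hat.sum(axis=1)               # \hat{d}_i = 1 + sum_{j in N(i)} e_{j,i}
D_inv_sqrt = np.diag(d_hat ** -0.5)

# Matrix formulation: X' = D^{-1/2} \hat{A} D^{-1/2} X Theta.
X_matrix = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ Theta

# Node-wise formulation: x'_i = Theta^T sum_j e_{j,i} / sqrt(d_j d_i) * x_j.
X_node = np.zeros((N, F_out))
for i in range(N):
    acc = np.zeros(F_in)
    for j in range(N):
        if A_hat[i, j] != 0:
            acc += A_hat[i, j] / np.sqrt(d_hat[j] * d_hat[i]) * X[j]
    X_node[i] = Theta.T @ acc

assert np.allclose(X_matrix, X_node)
```

Note that because self-loops are folded into \(\mathbf{\hat{A}}\), the sum over \(j \in \mathcal{N}(i) \cup \{ i \}\) and the matrix product visit exactly the same terms.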
- Shapes:
input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\) or sparse matrix \((|\mathcal{V}|, |\mathcal{V}|)\), edge weights \((|\mathcal{E}|)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\)
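As a sanity check on the shapes listed above, here is a minimal NumPy sketch of the normalize=True computation on dense inputs. The edge_index/edge_weight layout mirrors the shapes table; the sizes and data are assumed example values:

```python
import numpy as np

rng = np.random.default_rng(1)
num_nodes, F_in, F_out = 5, 8, 16

x = rng.normal(size=(num_nodes, F_in))            # node features: (|V|, F_in)
edge_index = np.array([[0, 1, 2, 3, 4],           # row 0: source nodes
                       [1, 2, 3, 4, 0]])          # row 1: target nodes, shape (2, |E|)
edge_weight = rng.uniform(0.5, 1.5, size=edge_index.shape[1])  # (|E|,)
Theta = rng.normal(size=(F_in, F_out))            # stand-in for the learned weight

# Dense \hat{A} = A + I with \hat{A}[i, j] = e_{j,i}, then symmetric normalization.
A_hat = np.eye(num_nodes)
A_hat[edge_index[1], edge_index[0]] = edge_weight
d_inv_sqrt = A_hat.sum(axis=1) ** -0.5
out = (d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]) @ x @ Theta

assert out.shape == (num_nodes, F_out)            # output: (|V|, F_out)
```

The output keeps the node dimension \(|\mathcal{V}|\) and maps the feature dimension from \(F_{in}\) to \(F_{out}\); only \(\mathbf{\Theta}\) (and the optional bias, omitted here) carries learnable parameters.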