torch_geometric.nn.conv.NNConv
- class NNConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, nn: Callable, aggr: str = 'add', root_weight: bool = True, bias: bool = True, **kwargs)
Bases: MessagePassing

The continuous kernel-based convolutional operator from the "Neural Message Passing for Quantum Chemistry" paper.

This convolution is also known as the edge-conditioned convolution from the "Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs" paper (see torch_geometric.nn.conv.ECConv for an alias):

\[\mathbf{x}^{\prime}_i = \mathbf{\Theta} \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j \cdot h_{\mathbf{\Theta}}(\mathbf{e}_{i,j}),\]

where \(h_{\mathbf{\Theta}}\) denotes a neural network, i.e. a multi-layer perceptron (MLP).
- Parameters:
in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
out_channels (int) – Size of each output sample.
nn (torch.nn.Module) – A neural network \(h_{\mathbf{\Theta}}\) that maps edge features edge_attr of shape [-1, num_edge_features] to shape [-1, in_channels * out_channels], e.g., defined by torch.nn.Sequential (see the sketch after this list).
aggr (str, optional) – The aggregation scheme to use ("add", "mean", "max"). (default: "add")
root_weight (bool, optional) – If set to False, the layer will not add the transformed root node features to the output. (default: True)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
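A minimal construction sketch, assuming 16 input channels, 32 output channels, and 4-dimensional edge features (all sizes here are illustrative, not prescribed by the API):

```python
import torch
from torch_geometric.nn import NNConv

in_channels, out_channels, num_edge_features = 16, 32, 4  # illustrative sizes

# h_Theta: maps edge features of shape [-1, num_edge_features]
# to shape [-1, in_channels * out_channels], as required by NNConv.
edge_nn = torch.nn.Sequential(
    torch.nn.Linear(num_edge_features, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, in_channels * out_channels),
)

conv = NNConv(in_channels, out_channels, nn=edge_nn, aggr='add')
```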
- Shapes:
input: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\), edge features \((|\mathcal{E}|, D)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V}_t|, F_{out})\) if bipartite
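To illustrate the shapes above, a minimal forward-pass sketch on random data (the graph size and feature dimensions are assumptions chosen for illustration):

```python
import torch
from torch_geometric.nn import NNConv

num_nodes, num_edges = 10, 40                     # illustrative graph size
in_channels, out_channels, edge_dim = 16, 32, 4   # illustrative feature sizes

# Edge network h_Theta mapping edge features to in_channels * out_channels values.
edge_nn = torch.nn.Sequential(torch.nn.Linear(edge_dim, in_channels * out_channels))
conv = NNConv(in_channels, out_channels, nn=edge_nn)

x = torch.randn(num_nodes, in_channels)                 # node features (|V|, F_in)
edge_index = torch.randint(num_nodes, (2, num_edges))   # edge indices (2, |E|)
edge_attr = torch.randn(num_edges, edge_dim)            # edge features (|E|, D)

out = conv(x, edge_index, edge_attr)
print(out.shape)  # torch.Size([10, 32]) -> (|V|, F_out)
```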