torch_geometric.nn.conv.NNConv

class NNConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, nn: Callable, aggr: str = 'add', root_weight: bool = True, bias: bool = True, **kwargs)[source]

Bases: MessagePassing

The continuous kernel-based convolutional operator from the "Neural Message Passing for Quantum Chemistry" paper.

This convolution is also known as the edge-conditioned convolution from the "Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs" paper (see torch_geometric.nn.conv.ECConv for an alias):

\[\mathbf{x}^{\prime}_i = \mathbf{\Theta} \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j \cdot h_{\mathbf{\Theta}}(\mathbf{e}_{i,j}),\]

where \(h_{\mathbf{\Theta}}\) denotes a neural network, i.e. a multi-layer perceptron (MLP).
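As a concrete illustration, the update rule above can be sketched in plain NumPy for a single node \(i\) (a hand-rolled dense version, not the library implementation; for simplicity, the MLP \(h_{\mathbf{\Theta}}\) is replaced by a single fixed linear map, and all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

F_in, F_out, D = 4, 3, 2          # node input, node output, edge feature dims
num_neighbors = 5

Theta = rng.normal(size=(F_out, F_in))        # root weight matrix
W_edge = rng.normal(size=(D, F_in * F_out))   # stand-in for the MLP h_Theta

x_i = rng.normal(size=F_in)                    # features of target node i
x_j = rng.normal(size=(num_neighbors, F_in))   # features of neighbors j in N(i)
e_ij = rng.normal(size=(num_neighbors, D))     # edge features

# h_Theta(e_ij) yields one (F_in, F_out) weight matrix per edge.
edge_weights = (e_ij @ W_edge).reshape(num_neighbors, F_in, F_out)

# x'_i = Theta x_i + sum_j x_j · h_Theta(e_ij)   (i.e. aggr="add")
out = Theta @ x_i + sum(x_j[k] @ edge_weights[k] for k in range(num_neighbors))
print(out.shape)  # (3,)
```

Each edge thus contributes its own learned weight matrix, which is what distinguishes NNConv from convolutions with a single shared kernel.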

Parameters:
  • in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.

  • out_channels (int) – Size of each output sample.

  • nn (torch.nn.Module) – A neural network \(h_{\mathbf{\Theta}}\) that maps edge features edge_attr of shape [-1, num_edge_features] to shape [-1, in_channels * out_channels], e.g., defined by torch.nn.Sequential.

  • aggr (str, optional) – The aggregation scheme to use ("add", "mean", "max"). (default: "add")

  • root_weight (bool, optional) – If set to False, the layer will not add the transformed root node features to the output. (default: True)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

Shapes:
  • input: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\), edge features \((|\mathcal{E}|, D)\) (optional)

  • output: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V}_t|, F_{out})\) if bipartite

forward(x: Union[Tensor, Tuple[Tensor, Optional[Tensor]]], edge_index: Union[Tensor, SparseTensor], edge_attr: Optional[Tensor] = None, size: Optional[Tuple[int, int]] = None) Tensor[source]

Runs the forward pass of the module.

Return type:

Tensor

reset_parameters()[source]

Resets all learnable parameters of the module.