torch_geometric.nn.conv.GINConv

class GINConv(nn: Callable, eps: float = 0.0, train_eps: bool = False, **kwargs)[source]

Bases: MessagePassing

The graph isomorphism operator from the “How Powerful are Graph Neural Networks?” paper.

\[\mathbf{x}^{\prime}_i = h_{\mathbf{\Theta}} \left( (1 + \epsilon) \cdot \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} \mathbf{x}_j \right)\]

or

\[\mathbf{X}^{\prime} = h_{\mathbf{\Theta}} \left( \left( \mathbf{A} + (1 + \epsilon) \cdot \mathbf{I} \right) \cdot \mathbf{X} \right),\]

here \(h_{\mathbf{\Theta}}\) denotes a neural network, i.e. a multi-layer perceptron (MLP).
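
A minimal construction sketch (not taken verbatim from the documentation; the channel sizes of 16 input and 32 hidden/output features are illustrative assumptions), defining \(h_{\mathbf{\Theta}}\) with torch.nn.Sequential and wrapping it in GINConv:

    import torch
    from torch.nn import Linear, ReLU, Sequential
    from torch_geometric.nn import GINConv

    # h_Theta: maps node features of shape [-1, 16] to shape [-1, 32].
    mlp = Sequential(Linear(16, 32), ReLU(), Linear(32, 32))
    conv = GINConv(mlp, eps=0.0, train_eps=False)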

Parameters:
  • nn (torch.nn.Module) – A neural network \(h_{\mathbf{\Theta}}\) that maps node features x of shape [-1, in_channels] to shape [-1, out_channels], e.g., defined by torch.nn.Sequential.

  • eps (float, optional) – (Initial) \(\epsilon\)-value. (default: 0.)

  • train_eps (bool, optional) – If set to True, \(\epsilon\) will be a trainable parameter. (default: False)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

Shapes:
  • input: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V}_s|, F_{s}), (|\mathcal{V}_t|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\)

  • output: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V}_t|, F_{out})\) if bipartite (see the shape-check sketch after this list)
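
A shape-check sketch continuing the construction example above (4 nodes and 4 edges of random data; all sizes are assumptions, not part of the API):

    x = torch.randn(4, 16)                      # node features: (|V|, F_in)
    edge_index = torch.tensor([[0, 1, 2, 3],    # source nodes j
                               [1, 0, 3, 2]])   # target nodes i; shape (2, |E|)

    out = conv(x, edge_index)
    print(out.shape)                            # torch.Size([4, 32]) -> (|V|, F_out)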

forward(x: Union[Tensor, Tuple[Tensor, Optional[Tensor]]], edge_index: Union[Tensor, SparseTensor], size: Optional[Tuple[int, int]] = None) → Tensor[source]

Runs the forward pass of the module.

Return type:

Tensor
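
Continuing the same toy example, a hedged sanity-check sketch comparing the returned tensor against the node-wise formula above for node 1, whose only in-neighbor is node 0 under the default source-to-target message flow:

    # h_Theta((1 + eps) * x_i + sum_{j in N(i)} x_j) for i = 1, with eps = 0.0:
    manual = mlp((1 + 0.0) * x[1] + x[0])
    assert torch.allclose(out[1], manual, atol=1e-6)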

reset_parameters()[source]

Resets all learnable parameters of the module.