torch_geometric.nn.conv.FiLMConv

class FiLMConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, num_relations: int = 1, nn: Optional[Callable] = None, act: Optional[Callable] = ReLU(), aggr: str = 'mean', **kwargs)[source]

Bases: MessagePassing

The FiLM graph convolutional operator from the "GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation" paper.

\[\mathbf{x}^{\prime}_i = \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}(i)} \sigma \left( \boldsymbol{\gamma}_{r,i} \odot \mathbf{W}_r \mathbf{x}_j + \boldsymbol{\beta}_{r,i} \right)\]

where \(\boldsymbol{\beta}_{r,i}, \boldsymbol{\gamma}_{r,i} = g(\mathbf{x}_i)\) with \(g\) being a single linear layer by default. Self-loops are automatically added to the input graph and represented as their own relation type.

Note

For an example of using FiLM, see examples/film.py.

Parameters:
  • in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.

  • out_channels (int) – Size of each output sample.

  • num_relations (int, optional) – Number of relations. (default: 1)

  • nn (torch.nn.Module, optional) – The neural network \(g\) that maps node features x_i of shape [-1, in_channels] to shape [-1, 2 * out_channels]. If set to None, \(g\) will be implemented as a single linear layer. (default: None)

  • act (callable, optional) – The activation function \(\sigma\). (default: torch.nn.ReLU())

  • aggr (str, optional) – The aggregation scheme to use ("add", "mean", "max"). (default: "mean")

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

Shapes:
  • input: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\), edge types \((|\mathcal{E}|)\)

  • output: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V_t}|, F_{out})\) if bipartite

forward(x: Union[Tensor, Tuple[Tensor, Tensor]], edge_index: Union[Tensor, SparseTensor], edge_type: Optional[Tensor] = None) Tensor[source]

Runs the forward pass of the module.

Return type:

Tensor

reset_parameters()[source]

Resets all learnable parameters of the module.