torch_geometric.nn.conv.SuperGATConv
- class SuperGATConv(in_channels: int, out_channels: int, heads: int = 1, concat: bool = True, negative_slope: float = 0.2, dropout: float = 0.0, add_self_loops: bool = True, bias: bool = True, attention_type: str = 'MX', neg_sample_ratio: float = 0.5, edge_sample_ratio: float = 1.0, is_undirected: bool = False, **kwargs)[source]
Bases: MessagePassing
The self-supervised graph attentional operator from the "How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision" paper:
\[\mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j},\]where the two types of attention \(\alpha_{i,j}^{\mathrm{MX\ or\ SD}}\) are computed as:
\[ \begin{align}\begin{aligned}\alpha_{i,j}^{\mathrm{MX\ or\ SD}} &= \frac{ \exp\left(\mathrm{LeakyReLU}\left( e_{i,j}^{\mathrm{MX\ or\ SD}} \right)\right)} {\sum_{k \in \mathcal{N}(i) \cup \{ i \}} \exp\left(\mathrm{LeakyReLU}\left( e_{i,k}^{\mathrm{MX\ or\ SD}} \right)\right)}\\e_{i,j}^{\mathrm{MX}} &= \mathbf{a}^{\top} [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j] \cdot \sigma \left( \left( \mathbf{\Theta}\mathbf{x}_i \right)^{\top} \mathbf{\Theta}\mathbf{x}_j \right)\\e_{i,j}^{\mathrm{SD}} &= \frac{ \left( \mathbf{\Theta}\mathbf{x}_i \right)^{\top} \mathbf{\Theta}\mathbf{x}_j }{ \sqrt{d} }\end{aligned}\end{align} \]The self-supervised task is a link prediction task that uses the attention values as input to predict the likelihood \(\phi_{i,j}^{\mathrm{MX\ or\ SD}}\) that an edge exists between nodes:
\[ \begin{align}\begin{aligned}\phi_{i,j}^{\mathrm{MX}} &= \sigma \left( \left( \mathbf{\Theta}\mathbf{x}_i \right)^{\top} \mathbf{\Theta}\mathbf{x}_j \right)\\\phi_{i,j}^{\mathrm{SD}} &= \sigma \left( \frac{ \left( \mathbf{\Theta}\mathbf{x}_i \right)^{\top} \mathbf{\Theta}\mathbf{x}_j }{ \sqrt{d} } \right)\end{aligned}\end{align} \]
Note
For an example of using SuperGAT, see examples/super_gat.py.
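The scoring functions above can be sketched in a few lines of plain PyTorch. This is only an illustration of the MX and SD logits and edge probabilities for a single edge \((i, j)\); the names used below (theta, att, h_i, h_j) are assumptions for the sketch, not the layer's internal attributes:

```python
import torch

# Minimal sketch of the MX and SD attention logits for a single edge (i, j).
# Names (theta, att) are illustrative assumptions, not SuperGATConv internals.
torch.manual_seed(0)
F_in, d = 16, 8                                   # input size, output size F_out
theta = torch.nn.Linear(F_in, d, bias=False)      # Θ
att = torch.nn.Parameter(torch.randn(2 * d))      # a

x_i, x_j = torch.randn(F_in), torch.randn(F_in)   # features of nodes i and j
h_i, h_j = theta(x_i), theta(x_j)                 # Θx_i, Θx_j
dot = h_i @ h_j                                   # (Θx_i)^T Θx_j

e_mx = (att @ torch.cat([h_i, h_j])) * torch.sigmoid(dot)  # e_{i,j}^{MX}
e_sd = dot / d ** 0.5                                       # e_{i,j}^{SD}

phi_mx = torch.sigmoid(dot)     # edge probability under MX
phi_sd = torch.sigmoid(e_sd)    # edge probability under SD
```

The attention coefficients \(\alpha_{i,j}\) are then obtained by normalizing the logits with LeakyReLU followed by a softmax over each node's neighborhood, as in the formula above.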
- Parameters:
in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
out_channels (int) – Size of each output sample.
heads (int, optional) – Number of multi-head-attentions. (default: 1)
concat (bool, optional) – If set to False, the multi-head attentions are averaged instead of concatenated. (default: True)
negative_slope (float, optional) – LeakyReLU angle of the negative slope. (default: 0.2)
dropout (float, optional) – Dropout probability of the normalized attention coefficients which exposes each node to a stochastically sampled neighborhood during training. (default: 0)
add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
attention_type (str, optional) – Type of attention to use ('MX', 'SD'). (default: 'MX')
neg_sample_ratio (float, optional) – The ratio of the number of sampled negative edges to the number of positive edges. (default: 0.5)
edge_sample_ratio (float, optional) – The ratio of samples to use for training among the number of edges. (default: 1.0)
is_undirected (bool, optional) – Whether the input graph is undirected. If not given, will be automatically computed from the input graph when negative sampling is performed. (default: False)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
- Shapes:
input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\), negative edge indices \((2, |\mathcal{E}^{(-)}|)\) (optional)
output: node features \((|\mathcal{V}|, H * F_{out})\)
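A minimal usage sketch, following the pattern in examples/super_gat.py; the tensor shapes and hyper-parameters below are illustrative assumptions:

```python
import torch
from torch_geometric.nn import SuperGATConv

x = torch.randn(100, 16)                      # |V| = 100 nodes, F_in = 16
edge_index = torch.randint(0, 100, (2, 400))  # |E| = 400 random edges

conv = SuperGATConv(16, 32, heads=4, attention_type='MX',
                    neg_sample_ratio=0.5, edge_sample_ratio=1.0)
out = conv(x, edge_index)                     # node features of shape [100, 4 * 32]

# During training, the self-supervised link-prediction loss is exposed via
# get_attention_loss() and can be added to the task loss
# (see examples/super_gat.py):
att_loss = conv.get_attention_loss()
```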
- forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], neg_edge_index: Optional[Tensor] = None, batch: Optional[Tensor] = None) Tensor[source]
Runs the forward pass of the module.
- Parameters:
x (torch.Tensor) – The input node features.
edge_index (torch.Tensor or SparseTensor) – The edge indices.
neg_edge_index (torch.Tensor, optional) – The negative edges to train against. If not given, uses negative sampling to calculate negative edges. (default: None)
batch (torch.Tensor, optional) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. Used when sampling negatives on-the-fly in mini-batch scenarios. (default: None)
- Return type: Tensor
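A sketch of supplying neg_edge_index explicitly (here via torch_geometric.utils.negative_sampling) instead of relying on the layer's internal sampling; the tensors are the same illustrative assumptions as in the example above:

```python
import torch
from torch_geometric.nn import SuperGATConv
from torch_geometric.utils import negative_sampling

x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 400))
conv = SuperGATConv(16, 32, heads=4, attention_type='SD')

# Pre-compute the negative edges rather than letting the layer
# sample them internally during forward():
neg_edge_index = negative_sampling(edge_index, num_nodes=x.size(0))
out = conv(x, edge_index, neg_edge_index=neg_edge_index)
```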