GATConv

class dgl.nn.pytorch.conv.GATConv(in_feats, out_feats, num_heads, feat_drop=0.0, attn_drop=0.0, negative_slope=0.2, residual=False, activation=None, allow_zero_in_degree=False, bias=True)[source]

Bases: Module

Graph attention layer from Graph Attention Network

\[h_i^{(l+1)} = \sum_{j\in \mathcal{N}(i)} \alpha_{i,j} W^{(l)} h_j^{(l)}\]

where \(\alpha_{ij}\) is the attention score between node \(i\) and node \(j\):

\[\begin{aligned}\alpha_{ij}^{l} &= \mathrm{softmax}_i (e_{ij}^{l})\\ e_{ij}^{l} &= \mathrm{LeakyReLU}\left(\vec{a}^T [W h_{i} \| W h_{j}]\right)\end{aligned}\]
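For intuition, the two equations above can be written out in plain PyTorch for a single attention head. This is a minimal sketch with hypothetical tensor names and sizes; the actual layer fuses this computation with DGL's message passing:

>>> import torch as th
>>> import torch.nn.functional as F
>>> N, D_in, D_out = 4, 5, 2                     # hypothetical sizes
>>> h = th.randn(N, D_in)                        # node features h^{(l)}
>>> W = th.randn(D_out, D_in)                    # W^{(l)}
>>> a = th.randn(2 * D_out)                      # attention vector a
>>> src = th.tensor([0, 1, 2, 3])                # hypothetical edges j -> i
>>> dst = th.tensor([1, 2, 3, 0])
>>> z = h @ W.t()                                # W h for every node
>>> e = F.leaky_relu(th.cat([z[dst], z[src]], dim=1) @ a, 0.2)  # e_{ij}
>>> alpha = th.zeros_like(e)
>>> for i in range(N):                           # softmax over incoming edges of node i
...     alpha[dst == i] = F.softmax(e[dst == i], dim=0)
>>> h_new = th.zeros(N, D_out).index_add_(0, dst, alpha.unsqueeze(1) * z[src])  # h^{(l+1)}
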
Parameters:
  • in_feats (int, or pair of ints) – Input feature size; i.e., the number of dimensions of \(h_i^{(l)}\). GATConv can be applied to homogeneous graphs and unidirectional bipartite graphs. If the layer is to be applied to a unidirectional bipartite graph, in_feats specifies the input feature size of the source and destination nodes. If a scalar is given, the source and destination node feature sizes take the same value.

  • out_feats (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

  • num_heads (int) – Number of heads in Multi-Head Attention.

  • feat_drop (float, optional) – Dropout rate on feature. Defaults: 0.

  • attn_drop (float, optional) – Dropout rate on attention weight. Defaults: 0.

  • negative_slope (float, optional) – LeakyReLU angle of negative slope. Defaults: 0.2.

  • residual (bool, optional) – If True, use residual connection. Defaults: False.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.

  • allow_zero_in_degree (bool, optional) – If there are 0-in-degree nodes in the graph, output for those nodes will be invalid since no message will be passed to those nodes. This is harmful for some applications causing silent performance regression. This module will raise a DGLError if it detects 0-in-degree nodes in input graph. By setting True, it will suppress the check and let the users handle it by themselves. Defaults: False.

  • bias (bool, optional) – If True, learns a bias term. Defaults: True.

Note

Zero-in-degree nodes will lead to invalid output values, because no message is passed to those nodes and the aggregation function is applied on empty input. A common practice to avoid this, when the graph is homogeneous, is to add a self-loop for each node, which can be achieved by:

>>> g = ... # a DGLGraph
>>> g = dgl.add_self_loop(g)

Calling add_self_loop will not work for some graphs, for example, heterogeneous graphs, since the edge type cannot be decided for self-loop edges. Set allow_zero_in_degree to True for those cases to unblock the code and handle zero-in-degree nodes manually. A common practice is to filter out the zero-in-degree nodes when using the output of the conv layer, as sketched below.
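For reference, a minimal sketch of that filtering, assuming a DGLGraph g, node features feat, and a GATConv instance conv created with allow_zero_in_degree=True:

>>> res = conv(g, feat)
>>> # keep only the nodes that actually received messages
>>> res = res[g.in_degrees() > 0]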

Examples

>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import GATConv
>>> # Case 1: Homogeneous graph
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> g = dgl.add_self_loop(g)
>>> feat = th.ones(6, 10)
>>> gatconv = GATConv(10, 2, num_heads=3)
>>> res = gatconv(g, feat)
>>> res
tensor([[[ 3.4570,  1.8634],
        [ 1.3805, -0.0762],
        [ 1.0390, -1.1479]],
        [[ 3.4570,  1.8634],
        [ 1.3805, -0.0762],
        [ 1.0390, -1.1479]],
        [[ 3.4570,  1.8634],
        [ 1.3805, -0.0762],
        [ 1.0390, -1.1479]],
        [[ 3.4570,  1.8634],
        [ 1.3805, -0.0762],
        [ 1.0390, -1.1479]],
        [[ 3.4570,  1.8634],
        [ 1.3805, -0.0762],
        [ 1.0390, -1.1479]],
        [[ 3.4570,  1.8634],
        [ 1.3805, -0.0762],
        [ 1.0390, -1.1479]]], grad_fn=<BinaryReduceBackward>)
>>> # Case 2: Unidirectional bipartite graph
>>> u = [0, 1, 0, 0, 1]
>>> v = [0, 1, 2, 3, 2]
>>> g = dgl.heterograph({('A', 'r', 'B'): (u, v)})
>>> u_feat = th.tensor(np.random.rand(2, 5).astype(np.float32))
>>> v_feat = th.tensor(np.random.rand(4, 10).astype(np.float32))
>>> gatconv = GATConv((5,10), 2, 3)
>>> res = gatconv(g, (u_feat, v_feat))
>>> res
tensor([[[-0.6066,  1.0268],
        [-0.5945, -0.4801],
        [ 0.1594,  0.3825]],
        [[ 0.0268,  1.0783],
        [ 0.5041, -1.3025],
        [ 0.6568,  0.7048]],
        [[-0.2688,  1.0543],
        [-0.0315, -0.9016],
        [ 0.3943,  0.5347]],
        [[-0.6066,  1.0268],
        [-0.5945, -0.4801],
        [ 0.1594,  0.3825]]], grad_fn=<BinaryReduceBackward>)
forward(graph, feat, edge_weight=None, get_attention=False)[source]

Description

Compute graph attention network layer.

param graph:

The graph.

type graph:

DGLGraph

param feat:

If a torch.Tensor is given, the input feature of shape \((N, *, D_{in})\) where \(D_{in}\) is size of input feature, \(N\) is the number of nodes. If a pair of torch.Tensor is given, the pair must contain two tensors of shape \((N_{in}, *, D_{in_{src}})\) and \((N_{out}, *, D_{in_{dst}})\).

type feat:

torch.Tensor or pair of torch.Tensor

param edge_weight:

A 1D tensor of edge weight values. Shape: \((|E|,)\).

type edge_weight:

torch.Tensor, optional

param get_attention:

Whether to return the attention values. Default to False.

type get_attention:

bool, optional

returns:
  • torch.Tensor – The output feature of shape \((N, *, H, D_{out})\) where \(H\) is the number of heads, and \(D_{out}\) is size of output feature.

  • torch.Tensor, optional – The attention values of shape \((E, *, H, 1)\), where \(E\) is the number of edges. This is returned only when get_attention is True.

raises DGLError:

If there are 0-in-degree nodes in the input graph, it will raise DGLError since no message will be passed to those nodes. This will cause invalid output. The error can be ignored by setting allow_zero_in_degree parameter to True.
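For reference, a minimal sketch of these options, reusing g, feat, and gatconv from Case 1 of the examples above; the all-ones edge weight is a hypothetical placeholder:

>>> # attention values: one per edge and per head, shape (num_edges, num_heads, 1)
>>> res, attn = gatconv(g, feat, get_attention=True)
>>> # edge_weight is a 1D tensor with one weight per edge
>>> res = gatconv(g, feat, edge_weight=th.ones(g.num_edges()))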

reset_parameters()[source]

Description

Reinitialize learnable parameters.

Note

The fc weights \(W^{(l)}\) are initialized using Glorot uniform initialization. The attention weights are initialized using the Xavier initialization method.
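For reference, re-initialization is an in-place call on the module; a minimal sketch:

>>> gatconv = GATConv(10, 2, num_heads=3)
>>> gatconv.reset_parameters()  # re-draws the fc and attention weights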