GCN2Conv

class dgl.nn.pytorch.conv.GCN2Conv(in_feats, layer, alpha=0.1, lambda_=1, project_initial_features=True, allow_zero_in_degree=False, bias=True, activation=None)[source]

Bases: Module

Graph Convolutional Network via Initial residual and Identity mapping (GCNII), from Simple and Deep Graph Convolutional Networks

Mathematically it is defined as follows:

\[\mathbf{h}^{(l+1)} =\left( (1 - \alpha)(\mathbf{D}^{-1/2} \mathbf{\hat{A}} \mathbf{D}^{-1/2})\mathbf{h}^{(l)} + \alpha {\mathbf{h}^{(0)}} \right) \left( (1 - \beta_l) \mathbf{I} + \beta_l \mathbf{W} \right)\]

where \(\mathbf{\hat{A}}\) is the adjacency matrix with self-loops, \(\mathbf{D}_{ii} = \sum_{j=0} \mathbf{\hat{A}}_{ij}\) is its diagonal degree matrix, \(\mathbf{h}^{(0)}\) is the initial node features, \(\mathbf{h}^{(l)}\) is the feature of layer \(l\), \(\alpha\) is the fraction of the initial node features, and \(\beta_l\) is a hyperparameter that tunes the strength of the identity mapping. It is defined by \(\beta_l = \log(\frac{\lambda}{l}+1)\approx\frac{\lambda}{l}\), where \(\lambda\) is a hyperparameter. \(\beta\) ensures that the decay of the weight matrix adaptively increases as more layers are stacked.
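
The update can be read as two steps: an initial-residual smoothing that mixes \(\mathbf{h}^{(l)}\) with \(\mathbf{h}^{(0)}\), followed by an identity-mapped linear transform. The following is a minimal dense PyTorch sketch of one such step, not the DGL implementation; the helper gcnii_step and the dense normalized adjacency A_hat are illustrative assumptions:

>>> import math
>>> import torch
>>> def gcnii_step(A_hat, h, h0, W, alpha, lambda_, layer):
...     # beta_l = log(lambda/l + 1): identity-mapping strength at layer l
...     beta = math.log(lambda_ / layer + 1)
...     # initial residual: mix the graph-smoothed features with h^(0)
...     smoothed = (1 - alpha) * (A_hat @ h) + alpha * h0
...     # identity mapping: multiply by (1 - beta) I + beta W
...     return smoothed @ ((1 - beta) * torch.eye(W.size(0)) + beta * W)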

Parameters:
  • in_feats (int) – Input feature size; i.e., the number of dimensions of \(h_j^{(l)}\).

  • layer (int) – The index of the current layer.

  • alpha (float) – The fraction of the initial input features. Default: 0.1

  • lambda_ (float) – The hyperparameter that ensures the decay of the weight matrix adaptively increases as more layers are stacked. Default: 1

  • project_initial_features (bool) – Whether to share a weight matrix between the initial features and the smoothed features. Default: True

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.

  • allow_zero_in_degree (bool, optional) – If there are 0-in-degree nodes in the graph, output for those nodes will be invalid since no message will be passed to those nodes. This is harmful for some applications causing silent performance regression. This module will raise a DGLError if it detects 0-in-degree nodes in input graph. By setting True, it will suppress the check and let the users handle it by themselves. Default: False.

Note

Zero in-degree nodes will lead to invalid output values. This is because no message will be passed to those nodes, so the aggregation function will be applied on empty input. A common practice to avoid this is to add a self-loop for each node in the graph if it is homogeneous, which can be achieved by:

>>> g = ... # a DGLGraph
>>> g = dgl.add_self_loop(g)

Calling add_self_loop will not work for some graphs, for example, heterogeneous graphs, since the edge type cannot be decided for self-loop edges. Set allow_zero_in_degree to True for those cases to unblock the code and handle zero-in-degree nodes manually. A common practice is to filter out the zero-in-degree nodes when using the output after conv, as sketched below.
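
A minimal sketch of such filtering (the small graph below is illustrative; imports as in the Example section):

>>> g = dgl.graph(([0, 1], [1, 2]), num_nodes=4)  # nodes 0 and 3 have no in-edges
>>> feat = th.ones(4, 3)
>>> conv = GCN2Conv(3, layer=1, allow_zero_in_degree=True)
>>> out = conv(g, feat, feat)
>>> valid_out = out[g.in_degrees() > 0]  # keep only nodes that received messages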

Example

>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import GCN2Conv
>>> # Homogeneous graph
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> feat = th.ones(6, 3)
>>> g = dgl.add_self_loop(g)
>>> conv1 = GCN2Conv(3, layer=1, alpha=0.5, \
...         project_initial_features=True, allow_zero_in_degree=True)
>>> conv2 = GCN2Conv(3, layer=2, alpha=0.5, \
...         project_initial_features=True, allow_zero_in_degree=True)
>>> res = feat
>>> res = conv1(g, res, feat)
>>> res = conv2(g, res, feat)
>>> print(res)
tensor([[1.3803, 3.3191, 2.9572],
        [1.3803, 3.3191, 2.9572],
        [1.3803, 3.3191, 2.9572],
        [1.4770, 3.8326, 3.2451],
        [1.3623, 3.2102, 2.8679],
        [1.3803, 3.3191, 2.9572]], grad_fn=<AddBackward0>)
forward(graph, feat, feat_0, edge_weight=None)[source]

Description

Compute graph convolution.

param graph:

The graph.

type graph:

DGLGraph

param feat:

The input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes.

type feat:

torch.Tensor

param feat_0:

The initial feature of shape \((N, D_{in})\).

type feat_0:

torch.Tensor

param edge_weight:

The edge_weight to use in the message passing process. This is equivalent to using a weighted adjacency matrix in the equation above, where \(\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}\) is computed based on dgl.nn.pytorch.conv.graphconv.EdgeWeightNorm.

type edge_weight:

torch.Tensor, optional

returns:

The output feature.

rtype:

torch.Tensor

raises DGLError:

If there are 0-in-degree nodes in the input graph, it will raise DGLError since no message will be passed to those nodes. This will cause invalid output. The error can be ignored by setting allow_zero_in_degree parameter to True.

Note

  • Input shape: \((N, *, \text{in_feats})\), where * means any number of additional dimensions and \(N\) is the number of nodes.

  • Output shape: \((N, *, \text{out_feats})\), where all but the last dimension have the same shape as the input.

  • Weight shape: \((\text{in_feats}, \text{out_feats})\).
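
As a sketch of how edge_weight can be supplied, the weights can first be normalized with EdgeWeightNorm and then passed to forward (reusing g, feat, and conv1 from the Example above; the unit weights are a stand-in):

>>> from dgl.nn import EdgeWeightNorm
>>> eweight = th.ones(g.num_edges())
>>> norm_eweight = EdgeWeightNorm(norm='both')(g, eweight)  # symmetric normalization
>>> res = conv1(g, feat, feat, edge_weight=norm_eweight)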

reset_parameters()[source]

Description

Reinitialize learnable parameters.