TAGConv

class dgl.nn.pytorch.conv.TAGConv(in_feats, out_feats, k=2, bias=True, activation=None)[source]

Bases: Module

Topology Adaptive Graph Convolutional layer from Topology Adaptive Graph Convolutional Networks

\[H^{K} = {\sum}_{k=0}^K (D^{-1/2} A D^{-1/2})^{k} X {\Theta}_{k},\]

where \(A\) denotes the adjacency matrix, \(D_{ii} = \sum_{j=0} A_{ij}\) its diagonal degree matrix, and \({\Theta}_{k}\) denotes the linear weights that sum the results of different hops together.
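
For intuition, the sum above can be evaluated directly with dense matrices. The following is a minimal sketch of the equation (the toy adjacency, features, and random weights \({\Theta}_{k}\) are illustrative, not DGL's sparse implementation):

>>> import torch as th
>>> # Dense sketch of H^K = sum_k (D^{-1/2} A D^{-1/2})^k X Theta_k with K=2
>>> A = th.tensor([[0., 1.], [1., 0.]])            # toy adjacency matrix
>>> D_inv_sqrt = th.diag(A.sum(1).pow(-0.5))       # D^{-1/2}
>>> A_hat = D_inv_sqrt @ A @ D_inv_sqrt            # normalized adjacency
>>> X = th.randn(2, 4)                             # node features
>>> thetas = [th.randn(4, 3) for _ in range(3)]    # Theta_0, Theta_1, Theta_2
>>> H = sum(th.matrix_power(A_hat, k) @ X @ thetas[k] for k in range(3))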

Parameters:
  • in_feats (int) – Input feature size; i.e., the number of dimensions of \(X\).

  • out_feats (int) – Output feature size; i.e., the number of dimensions of \(H^{K}\).

  • k (int, optional) – Number of hops \(K\). Default: 2.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.
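
For example, the optional arguments can be combined at construction time (a minimal sketch; the chosen values are illustrative):

>>> import torch.nn.functional as F
>>> from dgl.nn import TAGConv
>>> conv = TAGConv(in_feats=10, out_feats=2, k=3, bias=False, activation=F.relu)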

lin

The learnable linear module.

Type:

torch.nn.Module

Example

>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import TAGConv
>>>
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> feat = th.ones(6, 10)
>>> conv = TAGConv(10, 2, k=2)
>>> res = conv(g, feat)
>>> res
tensor([[ 0.5490, -1.6373],
        [ 0.5490, -1.6373],
        [ 0.5490, -1.6373],
        [ 0.5513, -1.8208],
        [ 0.5215, -1.6044],
        [ 0.3304, -1.9927]], grad_fn=<AddmmBackward>)
forward(graph, feat, edge_weight=None)[source]

Compute topology adaptive graph convolution.

Parameters:
  • graph (DGLGraph) – The graph.

  • feat (torch.Tensor) – The input feature of shape \((N, D_{in})\) where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes.

  • edge_weight (torch.Tensor, optional) – Edge weights to use in the message passing process. This is equivalent to using a weighted adjacency matrix in the equation above, and \(\tilde{D}^{-1/2}\tilde{A} \tilde{D}^{-1/2}\) is based on dgl.nn.pytorch.conv.graphconv.EdgeWeightNorm.

Returns:
  The output feature of shape \((N, D_{out})\) where \(D_{out}\) is the size of the output feature.

Return type:
  torch.Tensor
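
A short usage sketch for edge_weight, reusing g, feat, and conv from the example above (the all-ones weight values are illustrative):

>>> ew = th.ones(g.num_edges())
>>> res = conv(g, feat, edge_weight=ew)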

reset_parameters()[source]

Reinitialize learnable parameters.

Note

The model parameters are initialized using Glorot uniform initialization.
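
For reference, Glorot (Xavier) uniform initialization draws weights from \(U(-a, a)\) with \(a = \sqrt{6 / (fan_{in} + fan_{out})}\). A minimal sketch of that bound, plus re-initializing the layer from the example above (the sizes are illustrative):

>>> import math
>>> fan_in, fan_out = 10, 2                        # illustrative layer sizes
>>> a = math.sqrt(6.0 / (fan_in + fan_out))        # Glorot uniform bound
>>> w = th.empty(fan_out, fan_in).uniform_(-a, a)  # sample like Glorot init
>>> conv.reset_parameters()                        # re-draws the layer's weights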