GINConv
- class dgl.nn.pytorch.conv.GINConv(apply_func=None, aggregator_type='sum', init_eps=0, learn_eps=False, activation=None)[source]
Bases:
Module
Graph Isomorphism Network layer from How Powerful are Graph Neural Networks?
\[h_i^{(l+1)} = f_\Theta \left((1 + \epsilon) h_i^{l} + \mathrm{aggregate}\left(\left\{h_j^{l}, j\in\mathcal{N}(i) \right\}\right)\right)\]
If a weight tensor on each edge is provided, the weighted graph convolution is defined as:
\[h_i^{(l+1)} = f_\Theta \left((1 + \epsilon) h_i^{l} + \mathrm{aggregate}\left(\left\{e_{ji} h_j^{l}, j\in\mathcal{N}(i) \right\}\right)\right)\]
where \(e_{ji}\) is the weight on the edge from node \(j\) to node \(i\). Please make sure that \(e_{ji}\) is broadcastable with \(h_j^{l}\).
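To make the update rule concrete, here is a minimal sketch of the unweighted formula written with plain DGL message passing; gin_update and its argument names are hypothetical, and with the 'sum' aggregator it should behave like GINConv:
>>> import dgl.function as fn
>>> def gin_update(g, h, apply_func, eps=0.0):
...     with g.local_scope():
...         g.ndata['h'] = h
...         # aggregate({h_j, j in N(i)}) with the 'sum' aggregator
...         g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'neigh'))
...         # (1 + eps) * h_i plus the aggregated neighborhood, then f_Theta
...         rst = (1 + eps) * h + g.ndata['neigh']
...         return apply_func(rst) if apply_func is not None else rst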
- Parameters:
apply_func (callable activation function/layer or None) – If not None, apply this function to the updated node features, the \(f_\Theta\) in the formula. Default: None.
aggregator_type (str) – Aggregator type to use ('sum', 'max' or 'mean'). Default: 'sum'.
init_eps (float, optional) – Initial \(\epsilon\) value. Default: 0.
learn_eps (bool, optional) – If True, \(\epsilon\) will be a learnable parameter. Default: False.
activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.
Examples
>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import GINConv
>>>
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> feat = th.ones(6, 10)
>>> lin = th.nn.Linear(10, 10)
>>> conv = GINConv(lin, 'max')
>>> res = conv(g, feat)
>>> res
tensor([[-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.1804,  0.0758, -0.5159,  0.3569, -0.1408, -0.1395, -0.2387,  0.7773,
          0.5266, -0.4465]], grad_fn=<AddmmBackward>)
>>> # With activation
>>> from torch.nn.functional import relu
>>> conv = GINConv(lin, 'max', activation=relu)
>>> res = conv(g, feat)
>>> res
tensor([[5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [2.5011, 0.0000, 0.0089, 2.0541, 0.8262, 0.0000, 0.0000, 0.1371, 0.0000,
         0.0000]], grad_fn=<ReluBackward0>)
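In the paper, apply_func is typically a small MLP and \(\epsilon\) is learned; the sketch below shows that setup (the MLP sizes are illustrative, not prescribed by the API):
>>> # A sketch of the common GIN setup: MLP as apply_func, learnable epsilon
>>> mlp = th.nn.Sequential(th.nn.Linear(10, 16), th.nn.ReLU(), th.nn.Linear(16, 10))
>>> conv = GINConv(mlp, 'sum', init_eps=0, learn_eps=True)
>>> res = conv(g, feat)
>>> res.shape
torch.Size([6, 10])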
- forward(graph, feat, edge_weight=None)[source]
Description
Compute Graph Isomorphism Network layer.
- param graph:
The graph.
- type graph:
DGLGraph
- param feat:
If a torch.Tensor is given, the input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes. If a pair of torch.Tensor is given, the pair must contain two tensors of shape \((N_{in}, D_{in})\) and \((N_{out}, D_{in})\). If apply_func is not None, \(D_{in}\) should fit the input dimensionality requirement of apply_func.
- type feat:
torch.Tensor or a pair of torch.Tensor
- param edge_weight:
Optional tensor on the edges. If given, the convolution weights the messages by it.
- type edge_weight:
torch.Tensor, optional
- returns:
The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the output dimensionality of apply_func. If apply_func is None, \(D_{out}\) is the same as the input dimensionality.
- rtype:
torch.Tensor
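A brief usage sketch for edge_weight, reusing g, feat and lin from the examples above; shaping the weights as \((E, 1)\) is one choice that broadcasts against the \((N, 10)\) features, not the only valid one:
>>> # One weight per edge, shaped (E, 1) so it broadcasts with the node features
>>> conv = GINConv(lin, 'sum')
>>> ew = th.rand(g.num_edges(), 1)
>>> res = conv(g, feat, edge_weight=ew)
>>> res.shape
torch.Size([6, 10])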