CuGraphGATConv
- class dgl.nn.pytorch.conv.CuGraphGATConv(in_feats, out_feats, num_heads, feat_drop=0.0, negative_slope=0.2, residual=False, activation=None, bias=True)[source]
Bases: CuGraphBaseConv
Graph attention layer from Graph Attention Networks, with the sparse aggregation accelerated by cugraph-ops.

See dgl.nn.pytorch.conv.GATConv for the mathematical model.

This module depends on the pylibcugraphops package, which can be installed via conda install -c nvidia pylibcugraphops=23.04. pylibcugraphops 23.04 requires Python 3.8.x or 3.10.x.

Note

This is an experimental feature.
- Parameters:
in_feats (int) – Input feature size.
out_feats (int) – Output feature size.
num_heads (int) – Number of heads in Multi-Head Attention.
feat_drop (float, optional) – Dropout rate on feature. Default: 0.
negative_slope (float, optional) – LeakyReLU angle of negative slope. Default: 0.2.
residual (bool, optional) – If True, use residual connection. Default: False.
activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.
bias (bool, optional) – If True, learns a bias term. Default: True.
Examples
>>> import dgl
>>> import torch
>>> from dgl.nn import CuGraphGATConv
>>> device = 'cuda'
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3])).to(device)
>>> g = dgl.add_self_loop(g)
>>> feat = torch.ones(6, 10).to(device)
>>> conv = CuGraphGATConv(10, 2, num_heads=3).to(device)
>>> res = conv(g, feat)
>>> res
tensor([[[ 0.2340,  1.9226],
         [ 1.6477, -1.9986],
         [ 1.1138, -1.9302]],
        [[ 0.2340,  1.9226],
         [ 1.6477, -1.9986],
         [ 1.1138, -1.9302]],
        [[ 0.2340,  1.9226],
         [ 1.6477, -1.9986],
         [ 1.1138, -1.9302]],
        [[ 0.2340,  1.9226],
         [ 1.6477, -1.9986],
         [ 1.1138, -1.9302]],
        [[ 0.2340,  1.9226],
         [ 1.6477, -1.9986],
         [ 1.1138, -1.9302]],
        [[ 0.2340,  1.9226],
         [ 1.6477, -1.9986],
         [ 1.1138, -1.9302]]], device='cuda:0', grad_fn=<ViewBackward0>)
- forward(g, feat, max_in_degree=None)[source]
Forward computation.

- Parameters:
g (DGLGraph) – The graph.
feat (torch.Tensor) – Input features of shape \((N, D_{in})\).
max_in_degree (int) – Maximum in-degree of destination nodes. It is only effective when g is a DGLBlock, i.e., a bipartite graph. When g is generated from a neighbor sampler, the value should be set to the corresponding fanout. If not given, max_in_degree will be calculated on-the-fly.
- Returns:
Output features of shape \((N, H, D_{out})\), where \(H\) is the number of heads and \(D_{out}\) is the output feature size.
- Return type:
torch.Tensor
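When g comes from a neighbor sampler, each destination node has at most fanout in-edges, so passing that fanout as max_in_degree lets the layer skip computing degrees on-the-fly. A minimal sketch of this pattern (the graph, feature size, batch size, and fanout value here are illustrative assumptions, not from the source):

>>> import dgl
>>> import torch
>>> from dgl.nn import CuGraphGATConv
>>> device = 'cuda'
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3])).to(device)
>>> # sample at most 5 neighbors per destination node
>>> sampler = dgl.dataloading.NeighborSampler([5])
>>> dataloader = dgl.dataloading.DataLoader(
...     g, torch.arange(g.num_nodes()).to(device), sampler,
...     batch_size=2, shuffle=False)
>>> conv = CuGraphGATConv(10, 2, num_heads=3).to(device)
>>> feat = torch.ones(g.num_nodes(), 10).to(device)
>>> for input_nodes, output_nodes, blocks in dataloader:
...     # the block is bipartite, so max_in_degree matches the fanout
...     h = conv(blocks[0], feat[input_nodes], max_in_degree=5)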