dgl.broadcast_nodes
- dgl.broadcast_nodes(graph, graph_feat, *, ntype=None)
Generate a node feature equal to the graph-level feature graph_feat. The operation is similar to numpy.repeat (or torch.repeat_interleave). It is commonly used to normalize node features by a global vector. For example, to normalize the node features in a graph to the range \([0, 1)\):

>>> g = dgl.batch([...])  # batch multiple graphs
>>> g.ndata['h'] = ...    # some node features
>>> h_sum = dgl.broadcast_nodes(g, dgl.sum_nodes(g, 'h'))
>>> g.ndata['h'] /= h_sum  # normalize by summation
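For concreteness, a minimal end-to-end sketch of this normalization pattern; the graph structures, feature name 'h', and feature dimension below are illustrative assumptions rather than values taken from the snippet above:

>>> import dgl
>>> import torch as th
>>> g1 = dgl.graph(([0], [1]))        # a 2-node graph (assumed example)
>>> g2 = dgl.graph(([0, 1], [1, 2]))  # a 3-node graph (assumed example)
>>> g = dgl.batch([g1, g2])
>>> g.ndata['h'] = th.rand(g.num_nodes(), 4)               # some node features
>>> h_sum = dgl.broadcast_nodes(g, dgl.sum_nodes(g, 'h'))  # per-graph sums, repeated for each node
>>> g.ndata['h'] = g.ndata['h'] / h_sum                    # each graph's features now sum to 1 per dimension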
- Parameters:
  - graph (DGLGraph) – The graph.
  - graph_feat (tensor) – The feature to broadcast. The tensor shape is \((*)\) for a single graph and \((B, *)\) for a batched graph.
  - ntype (str, optional) – The node type name. Can be omitted if the graph has only one node type.
- Returns:
The node feature tensor of shape \((N, *)\), where \(N\) is the number of nodes.
- Return type:
Tensor
Examples
>>> import dgl
>>> import torch as th
Create two DGLGraph objects and initialize their node features.

>>> g1 = dgl.graph(([0], [1]))        # Graph 1
>>> g2 = dgl.graph(([0, 1], [1, 2]))  # Graph 2
>>> bg = dgl.batch([g1, g2])
>>> feat = th.rand(2, 5)
>>> feat
tensor([[0.4325, 0.7710, 0.5541, 0.0544, 0.9368],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014]])
Broadcast the feature to all nodes in the batched graph; feat[i] is broadcast to the nodes of the i-th example in the batch.
>>> dgl.broadcast_nodes(bg, feat)
tensor([[0.4325, 0.7710, 0.5541, 0.0544, 0.9368],
        [0.4325, 0.7710, 0.5541, 0.0544, 0.9368],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014]])
Broadcast the feature to all nodes in a single graph (the feature tensor to broadcast must have shape \((1, *)\)).
>>> feat0 = th.unsqueeze(feat[0], 0)
>>> dgl.broadcast_nodes(g1, feat0)
tensor([[0.4325, 0.7710, 0.5541, 0.0544, 0.9368],
        [0.4325, 0.7710, 0.5541, 0.0544, 0.9368]])
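On a graph with more than one node type, the ntype argument selects which node type receives the broadcast. A minimal sketch continuing the session above; the 'user'/'game' node types and the 'plays' relation are assumed for illustration:

>>> hg = dgl.heterograph({
...     ('user', 'plays', 'game'): (th.tensor([0, 1, 2]), th.tensor([0, 0, 1]))
... })
>>> gfeat = th.rand(1, 5)   # one (non-batched) graph-level feature
>>> out = dgl.broadcast_nodes(hg, gfeat, ntype='user')
>>> out.shape               # one row per 'user' node
torch.Size([3, 5])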
See also

broadcast_edges