MetaPath2Vec
- class dgl.nn.pytorch.MetaPath2Vec(g, metapath, window_size, emb_dim=128, negative_size=5, sparse=True)
Bases: Module

metapath2vec module from metapath2vec: Scalable Representation Learning for Heterogeneous Networks
To achieve efficient optimization, we leverage the negative sampling technique during training. For each node in a metapath, we treat it as a center node, sample nearby positive nodes within the context size, and draw negative samples among nodes of all types from all metapaths. We can then use the center-context node pairs and the context-negative node pairs to update the network.
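The objective sketched above is the standard skip-gram loss with negative sampling. The following is an illustrative reimplementation of that loss, not DGL's internal code; the function name and tensor shapes are assumptions for the sketch:

```python
import torch
import torch.nn.functional as F

def skipgram_neg_loss(center_emb, context_emb, neg_emb):
    """Skip-gram loss with negative sampling (illustrative sketch).

    center_emb:  (B, D) embeddings of center nodes
    context_emb: (B, D) embeddings of positive context nodes
    neg_emb:     (B, K, D) embeddings of K negative samples per center
    """
    # Positive term: pull center and context embeddings together.
    pos_score = (center_emb * context_emb).sum(dim=-1)                    # (B,)
    pos_loss = -F.logsigmoid(pos_score)
    # Negative term: push center away from each negative sample.
    neg_score = torch.bmm(neg_emb, center_emb.unsqueeze(-1)).squeeze(-1)  # (B, K)
    neg_loss = -F.logsigmoid(-neg_score).sum(dim=-1)
    return (pos_loss + neg_loss).mean()

# Toy shapes: batch of 4 pairs, 5 negatives each, 8-dim embeddings.
B, K, D = 4, 5, 8
loss = skipgram_neg_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, K, D))
```

Minimizing this loss raises the dot product of true center-context pairs while lowering it for randomly drawn negatives, which is what makes the sampled update efficient compared to a full softmax over all nodes.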
- Parameters:
g (DGLGraph) – Graph for learning node embeddings. Two different canonical edge types (utype, etype, vtype) sharing the same etype are not allowed.
metapath (list[str]) – A sequence of edge types in the form of strings. It defines a new edge type by composing multiple edge types in order. Note that the start node type and the end node type are commonly the same.
window_size (int) – In a random walk w, a node w[j] is considered close to a node w[i] if i - window_size <= j <= i + window_size.
emb_dim (int, optional) – Size of each embedding vector. Default: 128
negative_size (int, optional) – Number of negative samples to use for each positive sample. Default: 5
sparse (bool, optional) – If True, gradients with respect to the learnable weights will be sparse. Default: True
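The window_size rule above determines which (center, context) pairs are treated as positive samples along a walk. A minimal sketch of that pairing logic follows; the toy walk and the helper function are illustrative only, not part of the DGL API:

```python
def context_pairs(walk, window_size):
    """Enumerate (center, context) pairs along a random walk.

    A node walk[j] is a context of walk[i] iff
    i - window_size <= j <= i + window_size and j != i.
    """
    pairs = []
    for i, center in enumerate(walk):
        lo = max(0, i - window_size)
        hi = min(len(walk), i + window_size + 1)
        pairs.extend((center, walk[j]) for j in range(lo, hi) if j != i)
    return pairs

# Toy walk alternating user/company nodes, window_size=1:
walk = ['u0', 'c3', 'u7', 'c1', 'u2']
pairs = context_pairs(walk, window_size=1)
# Each interior node yields two pairs; the two endpoints yield one each.
```

With window_size=1 only immediate neighbors on the walk form positive pairs; larger windows trade broader context for noisier pairs.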
- node_embed
Embedding table of all nodes
- Type:
nn.Embedding
Examples
>>> import torch
>>> import dgl
>>> from torch.optim import SparseAdam
>>> from torch.utils.data import DataLoader
>>> from dgl.nn.pytorch import MetaPath2Vec

>>> # Define a model
>>> g = dgl.heterograph({
...     ('user', 'uc', 'company'): dgl.rand_graph(100, 1000).edges(),
...     ('company', 'cp', 'product'): dgl.rand_graph(100, 1000).edges(),
...     ('company', 'cu', 'user'): dgl.rand_graph(100, 1000).edges(),
...     ('product', 'pc', 'company'): dgl.rand_graph(100, 1000).edges()
... })
>>> model = MetaPath2Vec(g, ['uc', 'cu'], window_size=1)

>>> # Use the source node type of etype 'uc'
>>> dataloader = DataLoader(torch.arange(g.num_nodes('user')), batch_size=128,
...                         shuffle=True, collate_fn=model.sample)
>>> optimizer = SparseAdam(model.parameters(), lr=0.025)

>>> for (pos_u, pos_v, neg_v) in dataloader:
...     loss = model(pos_u, pos_v, neg_v)
...     optimizer.zero_grad()
...     loss.backward()
...     optimizer.step()

>>> # Get the embeddings of all user nodes
>>> user_nids = torch.LongTensor(model.local_to_global_nid['user'])
>>> user_emb = model.node_embed(user_nids)