dgl.from_scipy
- dgl.from_scipy(sp_mat, eweight_name=None, idtype=None, device=None)[source]
Create a graph from a SciPy sparse matrix and return it.
- Parameters:
sp_mat (scipy.sparse.spmatrix) – The adjacency matrix of the graph. Each nonzero entry sp_mat[i, j] represents an edge from node i to node j. The matrix must be square, of shape (N, N), where N is the number of nodes in the graph.
eweight_name (str, optional) – The edata name for storing the nonzero values of sp_mat. If given, DGL stores the nonzero values of sp_mat in edata[eweight_name] of the returned graph.
idtype (int32 or int64, optional) – The data type for storing the structure-related graph information such as node and edge IDs. It should be a framework-specific data type object (e.g., torch.int32). By default, DGL uses int64.
device (device context, optional) – The device of the resulting graph. It should be a framework-specific device object (e.g., torch.device). By default, DGL stores the graph on CPU.
- Returns:
The created graph.
- Return type:
DGLGraph
Notes
The function supports all kinds of SciPy sparse matrix classes (e.g., scipy.sparse.csr.csr_matrix). It converts the input matrix to the COOrdinate format using scipy.sparse.spmatrix.tocoo() before creating a DGLGraph. Creating from a scipy.sparse.coo.coo_matrix is hence the most efficient way.
DGL internally maintains multiple copies of the graph structure in different sparse formats and chooses the most efficient one depending on the computation invoked. If memory usage becomes an issue in the case of large graphs, use dgl.DGLGraph.formats() to restrict the allowed formats.
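For illustration, here is a minimal sketch, not part of the original reference, assuming that DGLGraph.formats() accepts a format name and returns a clone of the graph restricted to that format:
>>> import dgl
>>> import numpy as np
>>> from scipy.sparse import coo_matrix
>>> # A tiny 3-node cycle as a sparse adjacency matrix (hypothetical data).
>>> mat = coo_matrix((np.ones(3), (np.array([0, 1, 2]), np.array([1, 2, 0]))), shape=(3, 3))
>>> g = dgl.from_scipy(mat)
>>> # Keep only the CSR representation to limit memory use on large graphs.
>>> g_csr = g.formats('csr')
>>> g_csr.formats()  # query which sparse formats are currently allowed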
Examples
The following example uses the PyTorch backend.
>>> import dgl
>>> import numpy as np
>>> import torch
>>> from scipy.sparse import coo_matrix
Create a small three-edge graph.
>>> # Source nodes for edges (2, 1), (3, 2), (4, 3)
>>> src_ids = np.array([2, 3, 4])
>>> # Destination nodes for edges (2, 1), (3, 2), (4, 3)
>>> dst_ids = np.array([1, 2, 3])
>>> # Weight for edges (2, 1), (3, 2), (4, 3)
>>> eweight = np.array([0.2, 0.3, 0.5])
>>> sp_mat = coo_matrix((eweight, (src_ids, dst_ids)), shape=(5, 5))
>>> g = dgl.from_scipy(sp_mat)
Retrieve the edge weights.
>>> g = dgl.from_scipy(sp_mat, eweight_name='w')
>>> g.edata['w']
tensor([0.2000, 0.3000, 0.5000], dtype=torch.float64)
Create a graph with int32 as the data type of node/edge IDs on the first GPU.
>>> g = dgl.from_scipy(sp_mat, idtype=torch.int32, device='cuda:0')
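The Notes above state that any SciPy sparse matrix class is accepted; as a small sketch not taken from the original reference, the same graph can be built from a CSR matrix, reusing the arrays defined earlier:
>>> from scipy.sparse import csr_matrix
>>> sp_mat_csr = csr_matrix((eweight, (src_ids, dst_ids)), shape=(5, 5))
>>> # from_scipy converts the matrix to COO internally before building the graph.
>>> g = dgl.from_scipy(sp_mat_csr, eweight_name='w')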
See also