torch_geometric.nn.conv.SAGEConv
- class SAGEConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, aggr: Optional[Union[str, List[str], Aggregation]] = 'mean', normalize: bool = False, root_weight: bool = True, project: bool = False, bias: bool = True, **kwargs)
Bases: MessagePassing

The GraphSAGE operator from the “Inductive Representation Learning on Large Graphs” paper.
\[\mathbf{x}^{\prime}_i = \mathbf{W}_1 \mathbf{x}_i + \mathbf{W}_2 \cdot \mathrm{mean}_{j \in \mathcal{N}(i)} \mathbf{x}_j\]

If project = True, then \(\mathbf{x}_j\) is first projected via

\[\mathbf{x}_j \leftarrow \sigma ( \mathbf{W}_3 \mathbf{x}_j + \mathbf{b})\]

as described in Eq. (3) of the paper.
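To make the update rule concrete, here is a minimal sketch of the formula above in plain PyTorch (mean aggregation only, no projection, no bias). It is an illustration of the math, not the library's implementation, and the helper name sage_update is made up:

```python
import torch

def sage_update(x, edge_index, W1, W2):
    """x: [N, F_in] node features; edge_index: [2, E] with rows (source j, target i);
    W1, W2: [F_out, F_in] weight matrices."""
    src, dst = edge_index
    # Sum neighbor features x_j into their target node i ...
    agg = torch.zeros_like(x).index_add_(0, dst, x[src])
    # ... and divide by the in-degree to obtain the mean over N(i).
    deg = torch.zeros(x.size(0), dtype=x.dtype).index_add_(
        0, dst, torch.ones(dst.size(0), dtype=x.dtype))
    mean_j = agg / deg.clamp(min=1).unsqueeze(-1)
    # x'_i = W1 x_i + W2 * mean_{j in N(i)} x_j
    return x @ W1.t() + mean_j @ W2.t()
```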
- Parameters:
  - in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
  - out_channels (int) – Size of each output sample.
  - aggr (str or Aggregation, optional) – The aggregation scheme to use. Any aggregation of torch_geometric.nn.aggr can be used, e.g., "mean", "max", or "lstm". (default: "mean")
  - normalize (bool, optional) – If set to True, output features will be \(\ell_2\)-normalized, i.e., \(\frac{\mathbf{x}^{\prime}_i}{\| \mathbf{x}^{\prime}_i \|_2}\). (default: False)
  - root_weight (bool, optional) – If set to False, the layer will not add transformed root node features to the output. (default: True)
  - project (bool, optional) – If set to True, the layer will apply a linear transformation followed by an activation function before aggregation (as described in Eq. (3) of the paper). (default: False)
  - bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
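A short usage sketch of the constructor arguments above; the sizes and the choice of aggr="max" are arbitrary and for illustration only:

```python
import torch
from torch_geometric.nn import SAGEConv

conv = SAGEConv(in_channels=16, out_channels=32, aggr='max',
                normalize=True, project=True)

x = torch.randn(100, 16)                      # [|V|, F_in] node features
edge_index = torch.randint(0, 100, (2, 500))  # [2, |E|] random edges for illustration
out = conv(x, edge_index)                     # [100, 32]; rows are l2-normalized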
- Shapes:
inputs: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\)
outputs: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V_t}|, F_{out})\) if bipartite
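A sketch of the bipartite case described above, using a tuple of input sizes and a tuple of source/target node features (toy sizes chosen for illustration):

```python
import torch
from torch_geometric.nn import SAGEConv

# Source and target node sets with different feature dimensionalities.
conv = SAGEConv(in_channels=(8, 16), out_channels=32)

x_src = torch.randn(40, 8)               # (|V_s|, F_s)
x_dst = torch.randn(10, 16)              # (|V_t|, F_t)
edge_index = torch.stack([
    torch.randint(0, 40, (200,)),        # source node indices
    torch.randint(0, 10, (200,)),        # target node indices
])
out = conv((x_src, x_dst), edge_index)   # (|V_t|, F_out) -> [10, 32]
```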