torch_geometric.explain.algorithm.CaptumExplainer

class CaptumExplainer(attribution_method: Union[str, Any], **kwargs)[source]

Bases: ExplainerAlgorithm

A Captum-based explainer for identifying compact subgraph structures and node features that play a crucial role in the predictions made by a GNN.

This explainer algorithm uses Captum to compute attributions.

Currently, the following attribution methods are supported:

  • captum.attr.IntegratedGradients

  • captum.attr.Saliency

  • captum.attr.InputXGradient

  • captum.attr.Deconvolution

  • captum.attr.ShapleyValueSampling

  • captum.attr.GuidedBackprop

Parameters:
  • attribution_method (Attribution or str) – The Captum attribution method to use. Can be a string or a captum.attr method.

  • **kwargs – Additional arguments for the Captum attribution method.
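
A minimal usage sketch, assuming a trained node-level GNN bound to model and a Data object named data (both placeholders, not defined on this page); the algorithm is plugged into the generic Explainer interface:

    from torch_geometric.explain import Explainer, CaptumExplainer

    explainer = Explainer(
        model=model,  # assumed: a trained GNN for node classification
        algorithm=CaptumExplainer('IntegratedGradients'),
        explanation_type='model',
        node_mask_type='attributes',
        edge_mask_type='object',
        model_config=dict(
            mode='multiclass_classification',
            task_level='node',
            return_type='probs',
        ),
    )

    # Explain the prediction for a single node (index 10 is illustrative).
    explanation = explainer(data.x, data.edge_index, index=10)

Equivalently, the Captum class itself can be passed as attribution_method, e.g. CaptumExplainer(captum.attr.IntegratedGradients); additional keyword arguments are forwarded to that attribution method.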

forward(model: Module, x: Union[Tensor, Dict[str, Tensor]], edge_index: Union[Tensor, Dict[Tuple[str, str, str], Tensor]], *, target: Tensor, index: Optional[Union[int, Tensor]] = None, **kwargs) → Union[Explanation, HeteroExplanation][source]

Computes the explanation.

Parameters:
  • model (torch.nn.Module) – The model to explain.

  • x (Union[torch.Tensor, Dict[NodeType, torch.Tensor]]) – The input node features of a homogeneous or heterogeneous graph.

  • edge_index (Union[torch.Tensor, Dict[EdgeType, torch.Tensor]]) – The input edge indices of a homogeneous or heterogeneous graph.

  • target (torch.Tensor) – The target of the model.

  • index (Union[int, Tensor], optional) – The index of the model output to explain. Can be a single index or a tensor of indices. (default: None)

  • **kwargs (optional) – Additional keyword arguments passed to model.

Return type:

Union[Explanation, HeteroExplanation]
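
As a hedged follow-up to the sketch above, the returned Explanation exposes the computed attributions as masks that can be inspected or thresholded (shapes assume node_mask_type='attributes' and edge_mask_type='object'):

    explanation = explainer(data.x, data.edge_index, index=10)

    # Feature-level attributions per node: [num_nodes, num_features]
    print(explanation.node_mask.shape)

    # Attribution score per edge: [num_edges]
    print(explanation.edge_mask.shape)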

supports() → bool[source]

Checks if the explainer supports the user-defined settings provided in self.explainer_config and self.model_config.

Return type:

bool