Metrics Toolkit
- torcheval.metrics.toolkit.classwise_converter(input: Tensor, name: str, labels: List[str] | None = None) → Dict[str, Tensor]
Converts an unaveraged metric result tensor into a dictionary where each key is 'metricname_classlabel' and each value is the data associated with that class.
- Parameters:
input (torch.Tensor) – The tensor to be split along its first dimension.
name (str) – Name of the metric.
labels (List[str], Optional) – Optional list of strings indicating the different classes.
- Raises:
ValueError – When the length of labels is not equal to the number of classes.
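A minimal illustrative sketch (not from the upstream docs): it assumes a hypothetical 3-class, unaveraged per-class recall tensor and shows the 'metricname_classlabel' keying described above.
>>> import torch
>>> from torcheval.metrics.toolkit import classwise_converter
>>> per_class_recall = torch.tensor([0.9, 0.5, 0.75])  # hypothetical unaveraged per-class result
>>> result = classwise_converter(per_class_recall, "recall", labels=["cat", "dog", "bird"])
>>> sorted(result.keys())
['recall_bird', 'recall_cat', 'recall_dog']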
- torcheval.metrics.toolkit.clone_metric(metric: Metric) → Metric
Return a new metric instance which is cloned from the input metric.
- Parameters:
metric – The metric object to clone.
- Returns:
A new metric instance created from the clone.
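A brief illustrative sketch (not from the upstream docs), assuming the clone carries over the current state of the input metric and is independent of it afterwards:
>>> import torch
>>> from torcheval.metrics import Max
>>> from torcheval.metrics.toolkit import clone_metric
>>> max = Max()
>>> max.update(torch.tensor(3.0))
>>> cloned = clone_metric(max)
>>> cloned.update(torch.tensor(5.0))  # updating the clone does not affect the original
>>> max.compute()
tensor(3.)
>>> cloned.compute()
tensor(5.)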
- torcheval.metrics.toolkit.clone_metrics(metrics: _TMetrics) → List[Metric]
Return a list of new metric instances cloned from the input metrics.
- Parameters:
metrics – The metric objects to clone.
- Returns:
A list of cloned metric instances.
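A similar sketch for cloning several metrics at once (not from the upstream docs; the metrics and update values are only illustrative):
>>> import torch
>>> from torcheval.metrics import Max, Min
>>> from torcheval.metrics.toolkit import clone_metrics
>>> originals = [Max(), Min()]
>>> originals[0].update(torch.tensor(1.0))
>>> clones = clone_metrics(originals)
>>> clones[0].update(torch.tensor(7.0))  # the original Max stays untouched
>>> originals[0].compute()
tensor(1.)
>>> clones[0].compute()
tensor(7.)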
- torcheval.metrics.toolkit.get_synced_metric(metric: Metric, process_group: ProcessGroup | None = None, recipient_rank: int | Literal['all'] = 0) → Metric | None
Return a metric object on the recipient_rank whose internal state variables are synced across processes in the process_group. Returns None on non-recipient ranks. If "all" is passed as recipient_rank, all ranks in the process_group are treated as recipient ranks.
- Parameters:
metric – The metric object to sync.
process_group – The process group on which the metric states are gathered. Default: None (the entire process group).
recipient_rank – The destination rank. If the string "all" is passed, all ranks are destination ranks.
- Raises:
ValueError – When recipient_rank is not an integer or the string "all".
Example:
>>> # Assumes world_size of 3.
>>> # Process group initialization omitted on each rank.
>>> import torch
>>> import torch.distributed as dist
>>> from torcheval.metrics import Max
>>> from torcheval.metrics.toolkit import get_synced_metric
>>> max = Max()
>>> max.update(torch.tensor(dist.get_rank())).compute()
tensor(0.) # Rank 0
tensor(1.) # Rank 1
tensor(2.) # Rank 2
>>> synced_metric = get_synced_metric(max)  # by default sync metric states to Rank 0
>>> synced_metric.compute() if synced_metric else None
tensor(2.) # Rank 0
None # Rank 1 -- synced_metric is None
None # Rank 2 -- synced_metric is None
>>> synced_metric = get_synced_metric(max, recipient_rank=1)
>>> synced_metric.compute() if synced_metric else None
None # Rank 0 -- synced_metric is None
tensor(2.) # Rank 1
None # Rank 2 -- synced_metric is None
>>> get_synced_metric(max, recipient_rank="all").compute()
tensor(2.) # Rank 0
tensor(2.) # Rank 1
tensor(2.) # Rank 2
- torcheval.metrics.toolkit.get_synced_metric_collection(metric_collection: MutableMapping[str, Metric], process_group: ProcessGroup | None = None, recipient_rank: int | Literal['all'] = 0) → Dict[str, Metric] | None | MutableMapping[str, Metric]
Return a dictionary of metric objects to the recipient_rank, with their internal state variables synced across processes in the process_group. Returns None on non-recipient ranks. Data transfer is batched to maximize efficiency. If "all" is passed as recipient_rank, all ranks in the process_group are treated as recipient ranks.
- Parameters:
metric_collection (Dict[str, Metric]) – The dictionary of metric objects to sync.
process_group – The process group on which the metric states are gathered. Default: None (the entire process group).
recipient_rank – The destination rank. If the string "all" is passed, all ranks are destination ranks.
- Raises:
ValueError – When recipient_rank is not an integer or the string "all".
Example:
>>> # Assumes world_size of 3.
>>> # Process group initialization omitted on each rank.
>>> import torch
>>> import torch.distributed as dist
>>> from torcheval.metrics import Max, Min
>>> from torcheval.metrics.toolkit import get_synced_metric_collection
>>> metrics = {"max": Max(), "min": Min()}
>>> metrics["max"].update(torch.tensor(dist.get_rank()))
>>> metrics["min"].update(torch.tensor(dist.get_rank()))
>>> synced_metrics = get_synced_metric_collection(metrics)  # by default metrics sync to Rank 0
>>> synced_metrics["max"].compute() if synced_metrics else None
tensor(2.) # Rank 0
None # Rank 1 -- synced_metrics is None
None # Rank 2 -- synced_metrics is None
>>> synced_metrics["min"].compute() if synced_metrics else None
tensor(0.) # Rank 0
None # Rank 1 -- synced_metrics is None
None # Rank 2 -- synced_metrics is None
>>> # You can also sync to all ranks or choose a specific rank.
>>> synced_metrics = get_synced_metric_collection(metrics, recipient_rank="all")
>>> synced_metrics["max"].compute()
tensor(2.) # Rank 0
tensor(2.) # Rank 1
tensor(2.) # Rank 2
>>> synced_metrics["min"].compute()
tensor(0.) # Rank 0
tensor(0.) # Rank 1
tensor(0.) # Rank 2
- torcheval.metrics.toolkit.get_synced_state_dict(metric: Metric, process_group: ProcessGroup | None = None, recipient_rank: int | Literal['all'] = 0) → Dict[str, Any]
Return the state dict of a metric after syncing on the recipient_rank. Returns an empty dict on other ranks.
- Parameters:
metric – The metric object to sync and get state_dict() from.
process_group – The process group on which the metric states are gathered. Default: None (the entire process group).
recipient_rank – The destination rank. If the string "all" is passed, all ranks are destination ranks.
- Returns:
The state dict of the synced metric.
Example:
>>> # Assumes world_size of 3.
>>> # Process group initialization omitted on each rank.
>>> import torch
>>> import torch.distributed as dist
>>> from torcheval.metrics import Max
>>> from torcheval.metrics.toolkit import get_synced_state_dict
>>> max = Max()
>>> max.update(torch.tensor(dist.get_rank()))
>>> get_synced_state_dict(max)
{"max": tensor(2.)} # Rank 0
{} # Rank 1
{} # Rank 2
>>> get_synced_state_dict(max, recipient_rank="all")
{"max": tensor(2.)} # Rank 0
{"max": tensor(2.)} # Rank 1
{"max": tensor(2.)} # Rank 2
- torcheval.metrics.toolkit.get_synced_state_dict_collection(metric_collection: MutableMapping[str, Metric], process_group: ProcessGroup | None = None, recipient_rank: int | Literal['all'] = 0) → Dict[str, Dict[str, Any]] | None
Return the state dicts of a collection of metrics after syncing on the recipient_rank. Returns None on other ranks.
- Parameters:
metric_collection (Dict[str, Metric]) – The metric objects to sync and get state_dict() from.
process_group – The process group on which the metric states are gathered. Default: None (the entire process group).
recipient_rank – The destination rank. If the string "all" is passed, all ranks are destination ranks.
- Returns:
The state dicts of the synced metric collection.
Example:
>>> # Assumes world_size of 3.
>>> # Process group initialization omitted on each rank.
>>> import torch
>>> import torch.distributed as dist
>>> from torcheval.metrics import Max, Min
>>> from torcheval.metrics.toolkit import get_synced_state_dict_collection
>>> maximum = Max()
>>> maximum.update(torch.tensor(dist.get_rank()))
>>> minimum = Min()
>>> minimum.update(torch.tensor(dist.get_rank()))
>>> get_synced_state_dict_collection({"max rank": maximum, "min rank": minimum})
{"max rank": {"max": tensor(2.)}, "min rank": {"min": tensor(0.)}} # Rank 0
None # Rank 1
None # Rank 2
>>> get_synced_state_dict_collection({"max rank": maximum, "min rank": minimum}, recipient_rank="all")
{"max rank": {"max": tensor(2.)}, "min rank": {"min": tensor(0.)}} # Rank 0
{"max rank": {"max": tensor(2.)}, "min rank": {"min": tensor(0.)}} # Rank 1
{"max rank": {"max": tensor(2.)}, "min rank": {"min": tensor(0.)}} # Rank 2
- torcheval.metrics.toolkit.reset_metrics(metrics: _TMetrics) → _TMetrics
Reset the input metrics and return the reset collection back to the user.
- Parameters:
metrics – The metrics to reset.
Example:
>>> import torch
>>> from torcheval.metrics import Max, Min
>>> from torcheval.metrics.toolkit import reset_metrics
>>> max = Max()
>>> min = Min()
>>> max.update(torch.tensor(1)).compute()
>>> min.update(torch.tensor(2)).compute()
>>> max, min = reset_metrics((max, min))
>>> max.compute()
tensor(0.)
>>> min.compute()
tensor(0.)
- torcheval.metrics.toolkit.sync_and_compute(metric: Metric[TComputeReturn], process_group: ProcessGroup | None = None, recipient_rank: int | Literal['all'] = 0) → TComputeReturn | None
Sync the metric states and return the result of metric.compute() of the synced metric on the recipient rank. Returns None on other ranks.
- Parameters:
metric – The metric object to sync and compute.
process_group – The process group on which the metric states are gathered. Default: None (the entire process group).
recipient_rank – The destination rank. If the string "all" is passed, all ranks are destination ranks.
Example:
>>> # Assumes world_size of 3.
>>> # Process group initialization omitted on each rank.
>>> import torch
>>> import torch.distributed as dist
>>> from torcheval.metrics import Max
>>> from torcheval.metrics.toolkit import sync_and_compute
>>> max = Max()
>>> max.update(torch.tensor(dist.get_rank())).compute()
tensor(0.) # Rank 0
tensor(1.) # Rank 1
tensor(2.) # Rank 2
>>> sync_and_compute(max)
tensor(2.) # Rank 0
None # Rank 1
None # Rank 2
>>> sync_and_compute(max, recipient_rank="all")
tensor(2.) # Rank 0
tensor(2.) # Rank 1
tensor(2.) # Rank 2
- torcheval.metrics.toolkit.sync_and_compute_collection(metrics: MutableMapping[str, Metric], process_group: ProcessGroup | None = None, recipient_rank: int | Literal['all'] = 0) → Dict[str, Any] | None
Sync the states of a collection of metrics and return the results of metric.compute() of the synced metrics on the recipient rank. Returns None on other ranks.
- Parameters:
metrics – The dictionary of metric objects to sync and compute.
process_group – The process group on which the metric states are gathered. Default: None (the entire process group).
recipient_rank – The destination rank. If the string "all" is passed, all ranks are destination ranks.
Example:
>>> # Assumes world_size of 3.
>>> # Process group initialization omitted on each rank.
>>> import torch
>>> import torch.distributed as dist
>>> from torcheval.metrics import Max, Min
>>> from torcheval.metrics.toolkit import sync_and_compute_collection
>>> metrics = {"max": Max(), "min": Min()}
>>> metrics["max"].update(torch.tensor(dist.get_rank())).compute()
tensor(0.) # Rank 0
tensor(1.) # Rank 1
tensor(2.) # Rank 2
>>> metrics["min"].update(torch.tensor(dist.get_rank())).compute()
tensor(0.) # Rank 0
tensor(1.) # Rank 1
tensor(2.) # Rank 2
>>> sync_and_compute_collection(metrics)
{"max": tensor(2.), "min": tensor(0.)} # Rank 0
None # Rank 1
None # Rank 2
>>> sync_and_compute_collection(metrics, recipient_rank="all")
{"max": tensor(2.), "min": tensor(0.)} # Rank 0
{"max": tensor(2.), "min": tensor(0.)} # Rank 1
{"max": tensor(2.), "min": tensor(0.)} # Rank 2
- torcheval.metrics.toolkit.to_device(metrics: _TMetrics, device: device, *args: Any, **kwargs: Any) → _TMetrics
Move the input metrics to the target device and return the moved metrics back to the user.
- Parameters:
metrics – The metrics to move to the device.
device – The device to move the metrics to.
*args – Variadic arguments forwarded to Metric.to.
**kwargs – Named arguments forwarded to Metric.to.
Example:
>>> import torch
>>> from torcheval.metrics import Max, Min
>>> from torcheval.metrics.toolkit import to_device
>>> max = Max()
>>> min = Min()
>>> max, min = to_device((max, min), torch.device("cuda"))
>>> max.device
torch.device("cuda")
>>> min.device
torch.device("cuda")