Distributed Training with Ignite on CIFAR10
This tutorial is a brief introduction to distributed training with Ignite on one or more CPUs, GPUs, or TPUs. We will also cover several helper functions and Ignite concepts (setting up common training handlers, saving to / loading from checkpoints, etc.) which you can easily integrate into your code.
We will use distributed training to train a predefined ResNet18 on CIFAR10, using any of the following configurations:
- Single node, one or more GPUs
- Multiple nodes, multiple GPUs
- Single node, multiple CPUs
- TPUs on Google Colab
- In a Jupyter Notebook
The type of distributed training we will use is called data parallelism, in which we:
- Copy the model on each GPU
- Split the dataset and fit the model on different subsets
- Communicate the gradients at each iteration to keep the models in sync
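The per-process data split above is typically done by index, in the spirit of torch.utils.data.distributed.DistributedSampler: dataset indices are dealt out round-robin so each of the world_size processes sees a disjoint subset. A minimal, stand-alone sketch of the idea (the helper name is ours, not a library API):

```python
def partition_indices(num_samples, world_size, rank):
    """Round-robin split of dataset indices: one disjoint subset per process.

    Process `rank` gets indices rank, rank + world_size, rank + 2*world_size, ...
    so the union over all ranks covers the whole dataset exactly once.
    """
    return list(range(rank, num_samples, world_size))

# 10 samples across 2 processes: each rank sees half the data
print(partition_indices(10, world_size=2, rank=0))  # [0, 2, 4, 6, 8]
print(partition_indices(10, world_size=2, rank=1))  # [1, 3, 5, 7, 9]
```

Each process then builds its dataloader over its own index subset, which is what makes the gradient exchange at the end of each iteration equivalent to training on the full batch.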
PyTorch provides a torch.nn.parallel.DistributedDataParallel API for this task, however writing an implementation that supports different backends + configurations is tedious. In this example, we will see how to enable data-parallel distributed training, adaptable to various backends, in just a few lines of code, alongside:
- Computing training and validation metrics
- Setting up logging (and connecting with ClearML)
- Saving the best model weights
- Setting up a learning-rate scheduler
- Using automatic mixed precision
Required Dependencies
!pip install pytorch-ignite
For parsing arguments:
!pip install fire
For TPUs:
VERSION = !curl -s https://api.github.com/repos/pytorch/xla/releases/latest | grep -Po '"tag_name": "v\K.*?(?=")'
VERSION = VERSION[0][:-2] if VERSION[0].endswith('.0') else VERSION[0]  # remove trailing ".0" (e.g. "1.9.0" -> "1.9")
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-{VERSION}-cp37-cp37m-linux_x86_64.whl
With ClearML (Optional)
We can enable logging with ClearML to track experiments as follows:
- Make sure you have a ClearML account: https://app.community.clear.ml/
- Create credentials: Profile > Create new credentials > Copy to clipboard
- Run clearml-init and paste the credentials
!pip install clearml
!clearml-init
Specify with_clearml=True in config below and monitor the experiment on the dashboard. See the end of this tutorial for an example of such an experiment.
Download Data
Let's first download our data, which all the processes can later use to instantiate our dataloaders. The following command will download the CIFAR10 dataset to a folder called cifar10.
!python -c "from torchvision.datasets import CIFAR10; CIFAR10('cifar10', download=True)"
Common Configuration
We maintain a config dictionary which can be extended or changed to store the parameters required during training. We can refer back to this code when we use these parameters later.
config = {
"seed": 543,
"data_path": "cifar10",
"output_path": "output-cifar10/",
"model": "resnet18",
"batch_size": 512,
"momentum": 0.9,
"weight_decay": 1e-4,
"num_workers": 2,
"num_epochs": 5,
"learning_rate": 0.4,
"num_warmup_epochs": 1,
"validate_every": 3,
"checkpoint_every": 200,
"backend": None,
"resume_from": None,
"log_every_iters": 15,
"nproc_per_node": None,
"with_clearml": False,
"with_amp": False,
}
Basic Setup
Imports
from datetime import datetime
from pathlib import Path
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, models
from torchvision.transforms import (
Compose,
Normalize,
Pad,
RandomCrop,
RandomHorizontalFlip,
ToTensor,
)
import ignite
import ignite.distributed as idist
from ignite.contrib.engines import common
from ignite.handlers import PiecewiseLinear
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.handlers import Checkpoint, global_step_from_engine
from ignite.metrics import Accuracy, Loss
from ignite.utils import manual_seed, setup_logger
Next, with the help of the auto_ methods of idist (ignite.distributed), we will make our dataloaders, model, and optimizer automatically adapt to the current configuration: backend=None (non-distributed) or a backend like nccl, gloo, or xla-tpu (distributed).
Note that we are free to use the auto_ methods only partially (or not at all) and implement something custom instead.
Dataloaders
Next, we instantiate the train and test datasets from data_path, apply transforms to them, and return them via get_train_test_datasets().
def get_train_test_datasets(path):
train_transform = Compose(
[
Pad(4),
RandomCrop(32, fill=128),
RandomHorizontalFlip(),
ToTensor(),
Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
]
)
test_transform = Compose(
[
ToTensor(),
Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
]
)
train_ds = datasets.CIFAR10(
root=path, train=True, download=False, transform=train_transform
)
test_ds = datasets.CIFAR10(
root=path, train=False, download=False, transform=test_transform
)
return train_ds, test_ds
Finally, we pass the datasets to auto_dataloader().
def get_dataflow(config):
train_dataset, test_dataset = get_train_test_datasets(config["data_path"])
train_loader = idist.auto_dataloader(
train_dataset,
batch_size=config["batch_size"],
num_workers=config["num_workers"],
shuffle=True,
drop_last=True,
)
test_loader = idist.auto_dataloader(
test_dataset,
batch_size=2 * config["batch_size"],
num_workers=config["num_workers"],
shuffle=False,
)
return train_loader, test_loader
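A detail worth knowing: in distributed configurations, auto_dataloader() scales the batch size down to batch_size // world_size per process (and attaches a DistributedSampler), so the effective global batch size stays at config["batch_size"]. The auto_dataloader log lines later in this tutorial show 512 becoming 256 per process on 2 GPUs and 64 per process on 8 TPU cores. A sketch of the arithmetic:

```python
def per_process_batch_size(global_batch_size, world_size):
    """Each process receives an equal share of the global batch, so the
    effective batch size across all processes stays unchanged."""
    assert global_batch_size % world_size == 0, "batch size must divide evenly"
    return global_batch_size // world_size

print(per_process_batch_size(512, world_size=2))  # 256  (2 GPUs)
print(per_process_batch_size(512, world_size=8))  # 64   (8 TPU cores)
```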
Model
We check that the model given in config exists in torchvision.models, change its last layer to output 10 classes (as in CIFAR10), and pass it through auto_model(), which makes it automatically adaptable to both non-distributed and distributed configurations.
def get_model(config):
model_name = config["model"]
if model_name in models.__dict__:
fn = models.__dict__[model_name]
else:
raise RuntimeError(f"Unknown model name {model_name}")
model = idist.auto_model(fn(num_classes=10))
return model
Optimizer
Then we can set up the optimizer using the hyperparameters from config and pass it through auto_optim().
def get_optimizer(config, model):
optimizer = optim.SGD(
model.parameters(),
lr=config["learning_rate"],
momentum=config["momentum"],
weight_decay=config["weight_decay"],
nesterov=True,
)
optimizer = idist.auto_optim(optimizer)
return optimizer
Criterion
We put the loss function on the device.
def get_criterion():
return nn.CrossEntropyLoss().to(idist.device())
LR Scheduler
We will use PiecewiseLinear, one of the various LR schedulers provided by Ignite.
def get_lr_scheduler(config, optimizer):
milestones_values = [
(0, 0.0),
(config["num_iters_per_epoch"] * config["num_warmup_epochs"], config["learning_rate"]),
(config["num_iters_per_epoch"] * config["num_epochs"], 0.0),
]
lr_scheduler = PiecewiseLinear(
optimizer, param_name="lr", milestones_values=milestones_values
)
return lr_scheduler
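The milestones above encode a linear warmup from 0 to learning_rate over the first num_warmup_epochs epochs, followed by a linear decay to 0 by the end of training. As a rough illustration of how such a piecewise-linear schedule is interpolated (plain Python, not Ignite's actual implementation):

```python
def piecewise_linear(milestones_values, step):
    """Linearly interpolate between (step, value) milestone pairs,
    clamping to the first/last value outside the covered range."""
    if step <= milestones_values[0][0]:
        return milestones_values[0][1]
    if step >= milestones_values[-1][0]:
        return milestones_values[-1][1]
    for (s0, v0), (s1, v1) in zip(milestones_values, milestones_values[1:]):
        if s0 <= step <= s1:
            return v0 + (v1 - v0) * (step - s0) / (s1 - s0)

# e.g. 100 iterations/epoch, 1 warmup epoch, 5 epochs total, peak lr 0.4
milestones = [(0, 0.0), (100, 0.4), (500, 0.0)]
print(piecewise_linear(milestones, 50))   # 0.2  (halfway through warmup)
print(piecewise_linear(milestones, 100))  # 0.4  (peak)
print(piecewise_linear(milestones, 300))  # 0.2  (halfway through decay)
```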
Trainer
Save Models
We can create checkpoints either with a handler (in the case of ClearML) or by simply passing the path to the checkpoint files as save_handler:
If with_clearml=True is specified, we will use ClearMLSaver() to save the models on ClearML's file server.
def get_save_handler(config):
if config["with_clearml"]:
from ignite.contrib.handlers.clearml_logger import ClearMLSaver
return ClearMLSaver(dirname=config["output_path"])
return config["output_path"]
Resume from Checkpoint
If a checkpoint file path is provided, we can resume training from it by loading the file.
def load_checkpoint(resume_from):
checkpoint_fp = Path(resume_from)
assert (
checkpoint_fp.exists()
), f"Checkpoint '{checkpoint_fp.as_posix()}' is not found"
checkpoint = torch.load(checkpoint_fp.as_posix(), map_location="cpu")
return checkpoint
Create Trainer
Finally, we can create our trainer in four steps:
- Create a trainer object using create_supervised_trainer(), which internally defines the steps taken to process a single batch:
  - Move the batch to the device used in the current distributed configuration.
  - Put the model in train() mode.
  - Perform the forward pass by passing the inputs through the model and computing the loss. If Automatic Mixed Precision (AMP) is enabled, this step runs with autocast on, which allows it to execute in mixed precision.
  - Perform the backward pass. If AMP (which speeds up computation in large neural networks and reduces memory usage while preserving performance) is enabled, the loss is scaled before calling backward(), the optimizer is step()-ped while batches containing NaNs are discarded, and the scale is update()-d for the next iteration.
  - Store the loss as batch loss in state.output.
Internally, the above steps for creating the trainer would look like:
def train_step(engine, batch):
x, y = batch[0], batch[1]
if x.device != device:
x = x.to(device, non_blocking=True)
y = y.to(device, non_blocking=True)
model.train()
with autocast(enabled=with_amp):
y_pred = model(x)
loss = criterion(y_pred, y)
optimizer.zero_grad()
scaler.scale(loss).backward() # If with_amp=False, this is equivalent to loss.backward()
scaler.step(optimizer) # If with_amp=False, this is equivalent to optimizer.step()
scaler.update() # If with_amp=False, this step does nothing
return {"batch loss": loss.item()}
trainer = Engine(train_step)
- Set up some common Ignite training handlers. You can do this individually or use setup_common_training_handlers(), which takes the trainer and a part of the dataset (train_sampler), along with:
  - A dictionary mapping what to save in the checkpoint (to_save) to how often to save it (save_every_iters)
  - The learning rate scheduler
  - The output of train_step()
  - Other handlers
- If a resume_from file path is provided, load the states of the objects in to_save from the checkpoint file.
def create_trainer(
model, optimizer, criterion, lr_scheduler, train_sampler, config, logger
):
device = idist.device()
amp_mode = None
scaler = False
trainer = create_supervised_trainer(
model,
optimizer,
criterion,
device=device,
non_blocking=True,
output_transform=lambda x, y, y_pred, loss: {"batch loss": loss.item()},
amp_mode="amp" if config["with_amp"] else None,
scaler=config["with_amp"],
)
trainer.logger = logger
to_save = {
"trainer": trainer,
"model": model,
"optimizer": optimizer,
"lr_scheduler": lr_scheduler,
}
metric_names = [
"batch loss",
]
common.setup_common_training_handlers(
trainer=trainer,
train_sampler=train_sampler,
to_save=to_save,
save_every_iters=config["checkpoint_every"],
save_handler=get_save_handler(config),
lr_scheduler=lr_scheduler,
output_names=metric_names if config["log_every_iters"] > 0 else None,
with_pbars=False,
clear_cuda_cache=False,
)
if config["resume_from"] is not None:
checkpoint = load_checkpoint(config["resume_from"])
Checkpoint.load_objects(to_load=to_save, checkpoint=checkpoint)
return trainer
Evaluator
The evaluator will be created via create_supervised_evaluator(), which internally:
- Puts the model in eval() mode.
- Moves the batch to the device used in the current distributed configuration.
- Performs the forward pass. If AMP is enabled, autocast will be on.
- Stores the predictions and labels in state.output to compute metrics.
It will also attach the Ignite metrics passed to the evaluator.
def create_evaluator(model, metrics, config):
device = idist.device()
amp_mode = "amp" if config["with_amp"] else None
evaluator = create_supervised_evaluator(
model, metrics=metrics, device=device, non_blocking=True, amp_mode=amp_mode
)
return evaluator
Training
Before we begin training, we must set up a few things on the master process (rank = 0):
- Create the folders in which to store checkpoints, best models, and the TensorBoard logs, named in the format {model}_backend-{backend}-{world size}_{timestamp}.
- If we are using ClearML's file server to save models, create a Task and pass it our config dictionary and the hyperparameters specific to this experiment.
def setup_rank_zero(logger, config):
device = idist.device()
now = datetime.now().strftime("%Y%m%d-%H%M%S")
output_path = config["output_path"]
folder_name = (
f"{config['model']}_backend-{idist.backend()}-{idist.get_world_size()}_{now}"
)
output_path = Path(output_path) / folder_name
if not output_path.exists():
output_path.mkdir(parents=True)
config["output_path"] = output_path.as_posix()
logger.info(f"Output path: {config['output_path']}")
if config["with_clearml"]:
from clearml import Task
task = Task.init("CIFAR10-Training", task_name=output_path.stem)
task.connect_configuration(config)
# Log hyper parameters
hyper_params = [
"model",
"batch_size",
"momentum",
"weight_decay",
"num_epochs",
"learning_rate",
"num_warmup_epochs",
]
task.connect({k: config[k] for k in hyper_params})
Logging
This step is optional, however we can pass a setup_logger() object to log_basic_info() to log all basic information: the versions of the different libraries, the current configuration, the device and backend used by the current process (identified by its local rank), and the number of processes (world size). idist (ignite.distributed) provides several utility functions such as get_local_rank(), backend(), and get_world_size() for this.
def log_basic_info(logger, config):
logger.info(f"Train on CIFAR10")
logger.info(f"- PyTorch version: {torch.__version__}")
logger.info(f"- Ignite version: {ignite.__version__}")
if torch.cuda.is_available():
# explicitly import cudnn as torch.backends.cudnn can not be pickled with hvd spawning procs
from torch.backends import cudnn
logger.info(
f"- GPU Device: {torch.cuda.get_device_name(idist.get_local_rank())}"
)
logger.info(f"- CUDA version: {torch.version.cuda}")
logger.info(f"- CUDNN version: {cudnn.version()}")
logger.info("\n")
logger.info("Configuration:")
for key, value in config.items():
logger.info(f"\t{key}: {value}")
logger.info("\n")
if idist.get_world_size() > 1:
logger.info("\nDistributed setting:")
logger.info(f"\tbackend: {idist.backend()}")
logger.info(f"\tworld size: {idist.get_world_size()}")
logger.info("\n")
Here is a standard utility function to log the train and val metrics after every validate_every epochs.
def log_metrics(logger, epoch, elapsed, tag, metrics):
metrics_output = "\n".join([f"\t{k}: {v}" for k, v in metrics.items()])
logger.info(
f"\nEpoch {epoch} - Evaluation time (seconds): {elapsed:.2f} - {tag} metrics:\n {metrics_output}"
)
Begin Training
This is where the main logic resides, i.e. we will call all the above functions from here:
- Basic setup
  - Set a manual_seed() and setup_logger(), then log all basic information.
  - Initialize the dataloaders, model, optimizer, criterion, and lr_scheduler.
- Use the above objects to create a trainer.
- Evaluators
  - Define some relevant Ignite metrics such as Accuracy() and Loss().
  - Create two evaluators, train_evaluator and val_evaluator, to compute metrics on train_dataloader and val_dataloader respectively; val_evaluator, however, will also be used to store the best models based on validation metrics.
  - Define run_validation() to compute metrics on both dataloaders and log them, then attach this function to trainer so that it runs after every validate_every epochs and when training is completed.
- Set up TensorBoard logging on the master process using setup_tb_logging() so that the training and validation metrics along with the learning rate can be logged.
- Define a Checkpoint() object to store the two best models (n_saved) by validation accuracy (defined in metrics as Accuracy()) and attach it to val_evaluator so that it runs every time val_evaluator completes.
- Try training on train_loader for num_epochs.
- Close the TensorBoard logger once training is completed.
def training(local_rank, config):
rank = idist.get_rank()
manual_seed(config["seed"] + rank)
logger = setup_logger(name="CIFAR10-Training")
log_basic_info(logger, config)
if rank == 0:
setup_rank_zero(logger, config)
train_loader, val_loader = get_dataflow(config)
model = get_model(config)
optimizer = get_optimizer(config, model)
criterion = get_criterion()
config["num_iters_per_epoch"] = len(train_loader)
lr_scheduler = get_lr_scheduler(config, optimizer)
trainer = create_trainer(
model, optimizer, criterion, lr_scheduler, train_loader.sampler, config, logger
)
metrics = {
"Accuracy": Accuracy(),
"Loss": Loss(criterion),
}
train_evaluator = create_evaluator(model, metrics, config)
val_evaluator = create_evaluator(model, metrics, config)
def run_validation(engine):
epoch = trainer.state.epoch
state = train_evaluator.run(train_loader)
log_metrics(logger, epoch, state.times["COMPLETED"], "train", state.metrics)
state = val_evaluator.run(val_loader)
log_metrics(logger, epoch, state.times["COMPLETED"], "val", state.metrics)
trainer.add_event_handler(
Events.EPOCH_COMPLETED(every=config["validate_every"]) | Events.COMPLETED,
run_validation,
)
if rank == 0:
evaluators = {"train": train_evaluator, "val": val_evaluator}
tb_logger = common.setup_tb_logging(
config["output_path"], trainer, optimizer, evaluators=evaluators
)
best_model_handler = Checkpoint(
{"model": model},
get_save_handler(config),
filename_prefix="best",
n_saved=2,
global_step_transform=global_step_from_engine(trainer),
score_name="val_accuracy",
score_function=Checkpoint.get_default_score_fn("Accuracy"),
)
val_evaluator.add_event_handler(
Events.COMPLETED,
best_model_handler,
)
try:
trainer.run(train_loader, max_epochs=config["num_epochs"])
except Exception as e:
logger.exception("")
raise e
if rank == 0:
tb_logger.close()
Running Distributed Code
We can easily run the above code with the context manager Parallel:
with idist.Parallel(backend=backend, nproc_per_node=nproc_per_node) as parallel:
parallel.run(training, config)
Parallel enables us to seamlessly run the same code on all supported distributed backends, as well as in non-distributed configurations. The backend here refers to the distributed communication framework; read more about which backend to choose here. Parallel accepts a backend and either:
Spawns nproc_per_node child processes and initializes a process group according to the provided backend (useful for standalone scripts).
This approach uses torch.multiprocessing.spawn and is the default way to spawn processes. However, it is slower due to the initialization overhead.
or
Only initializes a process group given the backend (useful with tools like torchrun, horovodrun, etc.).
This approach is recommended, since training is faster and it is easier to extend to multiple scripts.
We can pass additional information to Parallel as spawn_kwargs, as shown below.
Note: It is recommended to run distributed code as a script for ease of use, but we can also spawn processes in a Jupyter notebook (see the end of this tutorial). The complete code as a script can be found here. Choose one of the suggested ways below to run the script.
Single Node, One or More GPUs
We will use fire to convert run() into a CLI, use the arguments parsed inside run() directly, and start training from the script:
import fire
def run(backend=None, **spawn_kwargs):
config["backend"] = backend
with idist.Parallel(backend=config["backend"], **spawn_kwargs) as parallel:
parallel.run(training, config)
if __name__ == "__main__":
fire.Fire({"run": run})
Then we can run the script (e.g. for 2 GPUs) as:
Run with torchrun (recommended)
torchrun --nproc_per_node=2 main.py run --backend="nccl"
Run with internal spawning (torch.multiprocessing.spawn)
python -u main.py run --backend="nccl" --nproc_per_node=2
Run with horovodrun
Make sure backend=horovod. np below is the number of processes.
horovodrun -np 2 python -u main.py run --backend="horovod"
Multiple Nodes, Multiple GPUs
The code in the script is the same as for a single node with one or more GPUs:
import fire
def run(backend=None, **spawn_kwargs):
config["backend"] = backend
with idist.Parallel(backend=config["backend"], **spawn_kwargs) as parallel:
parallel.run(training, config)
if __name__ == "__main__":
fire.Fire({"run": run})
The only change is in how we run the script. We need to provide the IP address of the master node and its port, along with the rank of each node. For example, for 2 nodes (nnodes) with 2 GPUs each (nproc_per_node), we can:
Run with torchrun (recommended)
On node 0 (master node):
torchrun \
--nnodes=2 \
--nproc_per_node=2 \
--node_rank=0 \
--master_addr=master --master_port=2222 \
main.py run --backend="nccl"
On node 1 (worker node):
torchrun \
--nnodes=2 \
--nproc_per_node=2 \
--node_rank=1 \
--master_addr=master --master_port=2222 \
main.py run --backend="nccl"
Run with internal spawning
On node 0:
python -u main.py run \
--nnodes=2 \
--nproc_per_node=2 \
--node_rank=0 \
--master_addr=master --master_port=2222 \
--backend="nccl"
On node 1:
python -u main.py run \
--nnodes=2 \
--nproc_per_node=2 \
--node_rank=1 \
--master_addr=master --master_port=2222 \
--backend="nccl"
Run with horovodrun
np below is computed as nnodes x nproc_per_node.
horovodrun -np 4 -H hostname1:2,hostname2:2 python -u main.py run --backend="horovod"
Single Node, Multiple CPUs
This is similar to single node, one or more GPUs; the only difference is backend=gloo instead of nccl when running the script.
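For example, assuming the same main.py as above, launching 4 CPU processes with torchrun might look like:

```shell
torchrun --nproc_per_node=4 main.py run --backend="gloo"
```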
TPUs on Google Colab
Go to Runtime > Change runtime type and select Hardware accelerator = TPU.
nproc_per_node = 8
config["backend"] = "xla-tpu"
with idist.Parallel(backend=config["backend"], nproc_per_node=nproc_per_node) as parallel:
parallel.run(training, config)
2021-09-14 17:01:35,425 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'xla-tpu'
2021-09-14 17:01:35,427 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes:
nproc_per_node: 8
nnodes: 1
node_rank: 0
2021-09-14 17:01:35,428 ignite.distributed.launcher.Parallel INFO: Spawn function '<function training at 0x7fda404f4680>' in 8 processes
2021-09-14 17:01:47,607 CIFAR10-Training INFO: Train on CIFAR10
2021-09-14 17:01:47,639 CIFAR10-Training INFO: - PyTorch version: 1.8.2+cpu
2021-09-14 17:01:47,658 CIFAR10-Training INFO: - Ignite version: 0.4.6
2021-09-14 17:01:47,678 CIFAR10-Training INFO:
2021-09-14 17:01:47,697 CIFAR10-Training INFO: Configuration:
2021-09-14 17:01:47,721 CIFAR10-Training INFO: seed: 543
2021-09-14 17:01:47,739 CIFAR10-Training INFO: data_path: cifar10
2021-09-14 17:01:47,765 CIFAR10-Training INFO: output_path: output-cifar10/
2021-09-14 17:01:47,786 CIFAR10-Training INFO: model: resnet18
2021-09-14 17:01:47,810 CIFAR10-Training INFO: batch_size: 512
2021-09-14 17:01:47,833 CIFAR10-Training INFO: momentum: 0.9
2021-09-14 17:01:47,854 CIFAR10-Training INFO: weight_decay: 0.0001
2021-09-14 17:01:47,867 CIFAR10-Training INFO: num_workers: 2
2021-09-14 17:01:47,887 CIFAR10-Training INFO: num_epochs: 5
2021-09-14 17:01:47,902 CIFAR10-Training INFO: learning_rate: 0.4
2021-09-14 17:01:47,922 CIFAR10-Training INFO: num_warmup_epochs: 1
2021-09-14 17:01:47,940 CIFAR10-Training INFO: validate_every: 3
2021-09-14 17:01:47,949 CIFAR10-Training INFO: checkpoint_every: 200
2021-09-14 17:01:47,960 CIFAR10-Training INFO: backend: xla-tpu
2021-09-14 17:01:47,967 CIFAR10-Training INFO: resume_from: None
2021-09-14 17:01:47,975 CIFAR10-Training INFO: log_every_iters: 15
2021-09-14 17:01:47,984 CIFAR10-Training INFO: nproc_per_node: None
2021-09-14 17:01:48,003 CIFAR10-Training INFO: with_clearml: False
2021-09-14 17:01:48,019 CIFAR10-Training INFO: with_amp: False
2021-09-14 17:01:48,040 CIFAR10-Training INFO:
2021-09-14 17:01:48,059 CIFAR10-Training INFO:
Distributed setting:
2021-09-14 17:01:48,079 CIFAR10-Training INFO: backend: xla-tpu
2021-09-14 17:01:48,098 CIFAR10-Training INFO: world size: 8
2021-09-14 17:01:48,109 CIFAR10-Training INFO:
2021-09-14 17:01:48,130 CIFAR10-Training INFO: Output path: output-cifar10/resnet18_backend-xla-tpu-8_20210914-170148
2021-09-14 17:01:50,917 ignite.distributed.auto.auto_dataloader INFO: Use data loader kwargs for dataset 'Dataset CIFAR10':
{'batch_size': 64, 'num_workers': 2, 'drop_last': True, 'sampler': <torch.utils.data.distributed.DistributedSampler object at 0x7fda404d0750>, 'pin_memory': False}
2021-09-14 17:01:50,950 ignite.distributed.auto.auto_dataloader INFO: DataLoader is wrapped by `MpDeviceLoader` on XLA
2021-09-14 17:01:50,975 ignite.distributed.auto.auto_dataloader INFO: Use data loader kwargs for dataset 'Dataset CIFAR10':
{'batch_size': 128, 'num_workers': 2, 'sampler': <torch.utils.data.distributed.DistributedSampler object at 0x7fda404d0910>, 'pin_memory': False}
2021-09-14 17:01:51,000 ignite.distributed.auto.auto_dataloader INFO: DataLoader is wrapped by `MpDeviceLoader` on XLA
2021-09-14 17:01:53,866 CIFAR10-Training INFO: Engine run starting with max_epochs=5.
2021-09-14 17:02:23,913 CIFAR10-Training INFO: Epoch[1] Complete. Time taken: 00:00:30
2021-09-14 17:02:41,945 CIFAR10-Training INFO: Epoch[2] Complete. Time taken: 00:00:18
2021-09-14 17:03:13,870 CIFAR10-Training INFO:
Epoch 3 - Evaluation time (seconds): 14.00 - train metrics:
Accuracy: 0.32997744845360827
Loss: 1.7080145767054606
2021-09-14 17:03:19,283 CIFAR10-Training INFO:
Epoch 3 - Evaluation time (seconds): 5.39 - val metrics:
Accuracy: 0.3424
Loss: 1.691359375
2021-09-14 17:03:19,289 CIFAR10-Training INFO: Epoch[3] Complete. Time taken: 00:00:37
2021-09-14 17:03:37,535 CIFAR10-Training INFO: Epoch[4] Complete. Time taken: 00:00:18
2021-09-14 17:03:55,927 CIFAR10-Training INFO: Epoch[5] Complete. Time taken: 00:00:18
2021-09-14 17:04:07,598 CIFAR10-Training INFO:
Epoch 5 - Evaluation time (seconds): 11.66 - train metrics:
Accuracy: 0.42823775773195877
Loss: 1.4969784451514174
2021-09-14 17:04:10,190 CIFAR10-Training INFO:
Epoch 5 - Evaluation time (seconds): 2.56 - val metrics:
Accuracy: 0.4412
Loss: 1.47838994140625
2021-09-14 17:04:10,244 CIFAR10-Training INFO: Engine run complete. Time taken: 00:02:16
2021-09-14 17:04:10,313 ignite.distributed.launcher.Parallel INFO: End of run
Run in Jupyter Notebook
We have to spawn the processes inside the notebook, hence we will use internal spawning for this. Use backend=nccl for multiple GPUs and backend=gloo for multiple CPUs.
spawn_kwargs = {}
spawn_kwargs["start_method"] = "fork"
spawn_kwargs["nproc_per_node"] = 2
config["backend"] = "nccl"
with idist.Parallel(backend=config["backend"], **spawn_kwargs) as parallel:
parallel.run(training, config)
2021-09-14 19:15:15,335 ignite.distributed.launcher.Parallel INFO: Initialized distributed launcher with backend: 'nccl'
2021-09-14 19:15:15,337 ignite.distributed.launcher.Parallel INFO: - Parameters to spawn processes:
nproc_per_node: 2
nnodes: 1
node_rank: 0
start_method: fork
2021-09-14 19:15:15,338 ignite.distributed.launcher.Parallel INFO: Spawn function '<function training at 0x7f0e44c88dd0>' in 2 processes
2021-09-14 19:15:18,910 CIFAR10-Training INFO: Train on CIFAR10
2021-09-14 19:15:18,911 CIFAR10-Training INFO: - PyTorch version: 1.9.0
2021-09-14 19:15:18,912 CIFAR10-Training INFO: - Ignite version: 0.4.6
2021-09-14 19:15:18,913 CIFAR10-Training INFO: - GPU Device: GeForce GTX 1080 Ti
2021-09-14 19:15:18,913 CIFAR10-Training INFO: - CUDA version: 11.1
2021-09-14 19:15:18,914 CIFAR10-Training INFO: - CUDNN version: 8005
2021-09-14 19:15:18,915 CIFAR10-Training INFO:
2021-09-14 19:15:18,916 CIFAR10-Training INFO: Configuration:
2021-09-14 19:15:18,917 CIFAR10-Training INFO: seed: 543
2021-09-14 19:15:18,918 CIFAR10-Training INFO: data_path: cifar10
2021-09-14 19:15:18,919 CIFAR10-Training INFO: output_path: output-cifar10/
2021-09-14 19:15:18,920 CIFAR10-Training INFO: model: resnet18
2021-09-14 19:15:18,921 CIFAR10-Training INFO: batch_size: 512
2021-09-14 19:15:18,922 CIFAR10-Training INFO: momentum: 0.9
2021-09-14 19:15:18,923 CIFAR10-Training INFO: weight_decay: 0.0001
2021-09-14 19:15:18,924 CIFAR10-Training INFO: num_workers: 2
2021-09-14 19:15:18,925 CIFAR10-Training INFO: num_epochs: 5
2021-09-14 19:15:18,925 CIFAR10-Training INFO: learning_rate: 0.4
2021-09-14 19:15:18,926 CIFAR10-Training INFO: num_warmup_epochs: 1
2021-09-14 19:15:18,927 CIFAR10-Training INFO: validate_every: 3
2021-09-14 19:15:18,928 CIFAR10-Training INFO: checkpoint_every: 200
2021-09-14 19:15:18,929 CIFAR10-Training INFO: backend: nccl
2021-09-14 19:15:18,929 CIFAR10-Training INFO: resume_from: None
2021-09-14 19:15:18,930 CIFAR10-Training INFO: log_every_iters: 15
2021-09-14 19:15:18,931 CIFAR10-Training INFO: nproc_per_node: None
2021-09-14 19:15:18,931 CIFAR10-Training INFO: with_clearml: False
2021-09-14 19:15:18,932 CIFAR10-Training INFO: with_amp: False
2021-09-14 19:15:18,933 CIFAR10-Training INFO:
2021-09-14 19:15:18,933 CIFAR10-Training INFO:
Distributed setting:
2021-09-14 19:15:18,934 CIFAR10-Training INFO: backend: nccl
2021-09-14 19:15:18,935 CIFAR10-Training INFO: world size: 2
2021-09-14 19:15:18,935 CIFAR10-Training INFO:
2021-09-14 19:15:18,936 CIFAR10-Training INFO: Output path: output-cifar10/resnet18_backend-nccl-2_20210914-191518
2021-09-14 19:15:19,725 ignite.distributed.auto.auto_dataloader INFO: Use data loader kwargs for dataset 'Dataset CIFAR10':
{'batch_size': 256, 'num_workers': 1, 'drop_last': True, 'sampler': <torch.utils.data.distributed.DistributedSampler object at 0x7f0f8b7df8d0>, 'pin_memory': True}
2021-09-14 19:15:19,727 ignite.distributed.auto.auto_dataloader INFO: Use data loader kwargs for dataset 'Dataset CIFAR10':
{'batch_size': 512, 'num_workers': 1, 'sampler': <torch.utils.data.distributed.DistributedSampler object at 0x7f0e44ca9ad0>, 'pin_memory': True}
2021-09-14 19:15:19,873 ignite.distributed.auto.auto_model INFO: Apply torch DistributedDataParallel on model, device id: 0
2021-09-14 19:15:20,049 CIFAR10-Training INFO: Engine run starting with max_epochs=5.
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448265233/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448265233/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
2021-09-14 19:15:28,800 CIFAR10-Training INFO: Epoch[1] Complete. Time taken: 00:00:09
2021-09-14 19:15:37,474 CIFAR10-Training INFO: Epoch[2] Complete. Time taken: 00:00:09
2021-09-14 19:15:54,675 CIFAR10-Training INFO:
Epoch 3 - Evaluation time (seconds): 8.50 - train metrics:
Accuracy: 0.5533988402061856
Loss: 1.2227583423103254
2021-09-14 19:15:56,077 CIFAR10-Training INFO:
Epoch 3 - Evaluation time (seconds): 1.36 - val metrics:
Accuracy: 0.5699
Loss: 1.1869916015625
2021-09-14 19:15:56,079 CIFAR10-Training INFO: Epoch[3] Complete. Time taken: 00:00:19
2021-09-14 19:16:04,686 CIFAR10-Training INFO: Epoch[4] Complete. Time taken: 00:00:09
2021-09-14 19:16:13,347 CIFAR10-Training INFO: Epoch[5] Complete. Time taken: 00:00:09
2021-09-14 19:16:21,857 CIFAR10-Training INFO:
Epoch 5 - Evaluation time (seconds): 8.46 - train metrics:
Accuracy: 0.6584246134020618
Loss: 0.9565292830319748
2021-09-14 19:16:23,269 CIFAR10-Training INFO:
Epoch 5 - Evaluation time (seconds): 1.38 - val metrics:
Accuracy: 0.6588
Loss: 0.9517111328125
2021-09-14 19:16:23,271 CIFAR10-Training INFO: Engine run complete. Time taken: 00:01:03
2021-09-14 19:16:23,547 ignite.distributed.launcher.Parallel INFO: End of run
Important Links
- The complete code can be found here.
- Example of the logs of a ClearML experiment run on this code: