
Getting Started

Welcome to PyTorch-Ignite's quick-start guide, which covers the essentials of getting a project up and running while walking through the basic concepts of Ignite. In just a few lines of code, you can get your model trained and validated. The complete code can be found at the end of this guide.

Prerequisites

In this tutorial, we assume you are familiar with:

  1. The basics of Python and deep learning
  2. How PyTorch code is structured

Installation

From pip:

pip install pytorch-ignite

From conda:

conda install ignite -c pytorch

See here for other installation options.

Code

Import the following:

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.models import resnet18
from torchvision.transforms import Compose, Normalize, ToTensor

from ignite.engine import Engine, Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from ignite.handlers import ModelCheckpoint
from ignite.contrib.handlers import TensorboardLogger, global_step_from_engine

Speed things up by setting device to cuda if it is available, otherwise to cpu:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Define a class for your model, or use the predefined ResNet18 model below (modified for MNIST), instantiate it, and move it to the device:

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        
        # Changed the output layer to output 10 classes instead of 1000 classes
        self.model = resnet18(num_classes=10)

        # Changed the input layer to take grayscale images for MNIST instead of RGB images
        self.model.conv1 = nn.Conv2d(
            1, 64, kernel_size=3, padding=1, bias=False
        )

    def forward(self, x):
        return self.model(x)


model = Net().to(device)
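
As a quick sanity check (our own addition, not part of the original guide), a random 28x28 grayscale batch should produce logits for the 10 MNIST classes:

# Hypothetical sanity check: run a dummy batch of two grayscale images through the model
with torch.no_grad():
    dummy = torch.randn(2, 1, 28, 28, device=device)
    print(model(dummy).shape)  # expected: torch.Size([2, 10])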

Now let us define the training and validation datasets (as torch.utils.data.DataLoader) and store them in train_loader and val_loader respectively. We use the MNIST dataset here for ease of understanding.

data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])

train_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=True), batch_size=128, shuffle=True
)

val_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=False), batch_size=256, shuffle=False
)

Finally, we will specify the optimizer and the loss function:

optimizer = torch.optim.RMSprop(model.parameters(), lr=0.005)
criterion = nn.CrossEntropyLoss()

We are done with setting up the important parts of the project; PyTorch-Ignite will handle all the other boilerplate code, as we will see below. Next we have to define a trainer engine by passing our model, optimizer and loss function to create_supervised_trainer, and two evaluator engines by passing Ignite's out-of-the-box metrics and the model to create_supervised_evaluator. We define separate evaluator engines for training and validation because they will serve different purposes, as we will see later in this tutorial:

trainer = create_supervised_trainer(model, optimizer, criterion, device)

val_metrics = {
    "accuracy": Accuracy(),
    "loss": Loss(criterion)
}

train_evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)
val_evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)

The objects trainer, train_evaluator and val_evaluator are all instances of Engine, the main component of Ignite, which is essentially an abstraction over the training or validation loop.
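
To make that abstraction concrete, here is a toy sketch (purely illustrative, not part of the pipeline built in this guide): an Engine simply loops a process function over whatever iterable you pass to run() and keeps track of progress in engine.state:

def toy_step(engine, batch):
    # whatever the step function returns ends up in engine.state.output
    return sum(batch)

toy_engine = Engine(toy_step)
toy_state = toy_engine.run([[1, 2], [3, 4]], max_epochs=2)
print(toy_state.epoch, toy_state.iteration, toy_state.output)  # 2 4 7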

If you need more control over your training and validation loops, you can instead create custom trainer, train_evaluator and val_evaluator objects by wrapping the step logic in an Engine:

def train_step(engine, batch):
    model.train()
    optimizer.zero_grad()
    x, y = batch[0].to(device), batch[1].to(device)
    y_pred = model(x)
    loss = criterion(y_pred, y)
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)

def validation_step(engine, batch):
    model.eval()
    with torch.no_grad():
        x, y = batch[0].to(device), batch[1].to(device)
        y_pred = model(x)
        return y_pred, y

train_evaluator = Engine(validation_step)
val_evaluator = Engine(validation_step)

# Attach metrics to the evaluators
for name, metric in val_metrics.items():
    metric.attach(train_evaluator, name)

for name, metric in val_metrics.items():
    metric.attach(val_evaluator, name)

We can customize the code further by adding all kinds of event handlers. Engine allows handlers to be attached to various events that are triggered during the run; when an event fires, the attached handlers (functions) are executed. Thus, for logging purposes, we add a function to be executed at the end of every log_interval-th iteration:

# How many batches to wait before logging training status
log_interval = 100

@trainer.on(Events.ITERATION_COMPLETED(every=log_interval))
def log_training_loss(engine):
    print(f"Epoch[{engine.state.epoch}], Iter[{engine.state.iteration}] Loss: {engine.state.output:.2f}")

Or, equivalently, without the decorator, attach the handler function to the trainer via add_event_handler:

def log_training_loss(engine):
    print(f"Epoch[{engine.state.epoch}], Iter[{engine.state.iteration}] Loss: {engine.state.output:.2f}")

trainer.add_event_handler(Events.ITERATION_COMPLETED, log_training_loss)
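
add_event_handler also forwards any extra positional or keyword arguments to the handler when the event fires. The snippet below only illustrates this; the prefix argument is our own hypothetical addition and attaching it would duplicate the logging above:

def log_loss_with_prefix(engine, prefix):
    # prefix is forwarded from add_event_handler below
    print(f"{prefix} Epoch[{engine.state.epoch}], Iter[{engine.state.iteration}] Loss: {engine.state.output:.2f}")

# The extra argument "Train" is passed through to the handler
trainer.add_event_handler(Events.ITERATION_COMPLETED(every=log_interval), log_loss_with_prefix, "Train")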

When an epoch ends during training, we can compute the training and validation metrics by running train_evaluator on train_loader and val_evaluator on val_loader. Hence we attach two additional handlers to trainer that run when an epoch completes:

@trainer.on(Events.EPOCH_COMPLETED)
def log_training_results(trainer):
    train_evaluator.run(train_loader)
    metrics = train_evaluator.state.metrics
    print(f"Training Results - Epoch[{trainer.state.epoch}] Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['loss']:.2f}")


@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
    val_evaluator.run(val_loader)
    metrics = val_evaluator.state.metrics
    print(f"Validation Results - Epoch[{trainer.state.epoch}] Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['loss']:.2f}")

We can use ModelCheckpoint() as shown below to save the n_saved best models, determined by a metric (here, accuracy), after each completed epoch. We attach model_checkpoint to val_evaluator because we want the two models with the highest accuracy on the validation dataset rather than the training dataset. This is also why we defined two separate evaluators (val_evaluator and train_evaluator) earlier.

# Score function to return current value of any metric we defined above in val_metrics
def score_function(engine):
    return engine.state.metrics["accuracy"]

# Checkpoint to store n_saved best models wrt score function
model_checkpoint = ModelCheckpoint(
    "checkpoint",
    n_saved=2,
    filename_prefix="best",
    score_function=score_function,
    score_name="accuracy",
    global_step_transform=global_step_from_engine(trainer), # helps fetch the trainer's state
)
  
# Save the model after every epoch of val_evaluator is completed
val_evaluator.add_event_handler(Events.COMPLETED, model_checkpoint, {"model": model})

We will use TensorboardLogger() to log the trainer's loss, and the training and validation metrics, separately:

# Define a Tensorboard logger
tb_logger = TensorboardLogger(log_dir="tb-logger")

# Attach handler to plot trainer's loss every 100 iterations
tb_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED(every=log_interval),
    tag="training",
    output_transform=lambda loss: {"batch_loss": loss},
)

# Attach handler for plotting both evaluators' metrics after every epoch completes
for tag, evaluator in [("training", train_evaluator), ("validation", val_evaluator)]:
    tb_logger.attach_output_handler(
        evaluator,
        event_name=Events.EPOCH_COMPLETED,
        tag=tag,
        metric_names="all",
        global_step_transform=global_step_from_engine(trainer),
    )
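
If you also want to follow the learning rate, the same logger can track optimizer parameters. This is optional and not part of the original walkthrough; a sketch using attach_opt_params_handler:

# Optional: log the learning rate alongside the batch loss
tb_logger.attach_opt_params_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED(every=log_interval),
    optimizer=optimizer,
    param_name="lr",
)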

Finally, we start the engine on the training dataset and run it for 5 epochs:

trainer.run(train_loader, max_epochs=5)
Epoch[1], Iter[100] Loss: 0.19
Epoch[1], Iter[200] Loss: 0.13
Epoch[1], Iter[300] Loss: 0.08
Epoch[1], Iter[400] Loss: 0.11
Training Results - Epoch[1] Avg accuracy: 0.97 Avg loss: 0.09
Validation Results - Epoch[1] Avg accuracy: 0.97 Avg loss: 0.08
Epoch[2], Iter[500] Loss: 0.07
Epoch[2], Iter[600] Loss: 0.04
Epoch[2], Iter[700] Loss: 0.09
Epoch[2], Iter[800] Loss: 0.07
Epoch[2], Iter[900] Loss: 0.16
Training Results - Epoch[2] Avg accuracy: 0.93 Avg loss: 0.20
Validation Results - Epoch[2] Avg accuracy: 0.93 Avg loss: 0.20
Epoch[3], Iter[1000] Loss: 0.02
Epoch[3], Iter[1100] Loss: 0.02
Epoch[3], Iter[1200] Loss: 0.05
Epoch[3], Iter[1300] Loss: 0.06
Epoch[3], Iter[1400] Loss: 0.06
Training Results - Epoch[3] Avg accuracy: 0.94 Avg loss: 0.20
Validation Results - Epoch[3] Avg accuracy: 0.94 Avg loss: 0.23
Epoch[4], Iter[1500] Loss: 0.08
Epoch[4], Iter[1600] Loss: 0.02
Epoch[4], Iter[1700] Loss: 0.08
Epoch[4], Iter[1800] Loss: 0.07
Training Results - Epoch[4] Avg accuracy: 0.98 Avg loss: 0.06
Validation Results - Epoch[4] Avg accuracy: 0.98 Avg loss: 0.07
Epoch[5], Iter[1900] Loss: 0.02
Epoch[5], Iter[2000] Loss: 0.11
Epoch[5], Iter[2100] Loss: 0.05
Epoch[5], Iter[2200] Loss: 0.02
Epoch[5], Iter[2300] Loss: 0.01
Training Results - Epoch[5] Avg accuracy: 0.99 Avg loss: 0.02
Validation Results - Epoch[5] Avg accuracy: 0.99 Avg loss: 0.03

State:
	iteration: 2345
	epoch: 5
	epoch_length: 469
	max_epochs: 5
	output: 0.005351857747882605
	batch: <class 'list'>
	metrics: <class 'dict'>
	dataloader: <class 'torch.utils.data.dataloader.DataLoader'>
	seed: <class 'NoneType'>
	times: <class 'dict'>
# Let's close the logger and inspect our results
tb_logger.close()

%load_ext tensorboard

%tensorboard --logdir=.
# At last we can view our best models
!ls checkpoint
'best_model_4_accuracy=0.9856.pt'  'best_model_5_accuracy=0.9857.pt'
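
To reuse one of these saved models, the weights can be loaded back into the model. A minimal sketch, assuming at least one checkpoint was written during the run (the exact file name will differ):

# model_checkpoint.last_checkpoint points at the most recently saved file
best_model_path = model_checkpoint.last_checkpoint
model.load_state_dict(torch.load(best_model_path, map_location=device))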

Next Steps

  1. If you want to keep learning more about PyTorch-Ignite, check out the tutorials.
  2. Head over to the how-to guides if you are looking for a solution to a specific problem.
  3. If you want to set up a PyTorch-Ignite project, visit Code Generator for a variety of easy-to-customize templates and out-of-the-box features.

Complete Code

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.models import resnet18
from torchvision.transforms import Compose, Normalize, ToTensor

from ignite.engine import Engine, Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from ignite.handlers import ModelCheckpoint
from ignite.contrib.handlers import TensorboardLogger, global_step_from_engine

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
    
        self.model = resnet18(num_classes=10)

        self.model.conv1 = nn.Conv2d(
            1, 64, kernel_size=3, padding=1, bias=False
        )

    def forward(self, x):
        return self.model(x)


model = Net().to(device)

data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])

train_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=True), batch_size=128, shuffle=True
)

val_loader = DataLoader(
    MNIST(download=True, root=".", transform=data_transform, train=False), batch_size=256, shuffle=False
)

optimizer = torch.optim.RMSprop(model.parameters(), lr=0.005)
criterion = nn.CrossEntropyLoss()

trainer = create_supervised_trainer(model, optimizer, criterion, device)

val_metrics = {
    "accuracy": Accuracy(),
    "loss": Loss(criterion)
}

train_evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)
val_evaluator = create_supervised_evaluator(model, metrics=val_metrics, device=device)

log_interval = 100

@trainer.on(Events.ITERATION_COMPLETED(every=log_interval))
def log_training_loss(engine):
    print(f"Epoch[{engine.state.epoch}], Iter[{engine.state.iteration}] Loss: {engine.state.output:.2f}")

@trainer.on(Events.EPOCH_COMPLETED)
def log_training_results(trainer):
    train_evaluator.run(train_loader)
    metrics = train_evaluator.state.metrics
    print(f"Training Results - Epoch[{trainer.state.epoch}] Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['loss']:.2f}")


@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(trainer):
    val_evaluator.run(val_loader)
    metrics = val_evaluator.state.metrics
    print(f"Validation Results - Epoch[{trainer.state.epoch}] Avg accuracy: {metrics['accuracy']:.2f} Avg loss: {metrics['loss']:.2f}")


def score_function(engine):
    return engine.state.metrics["accuracy"]


model_checkpoint = ModelCheckpoint(
    "checkpoint",
    n_saved=2,
    filename_prefix="best",
    score_function=score_function,
    score_name="accuracy",
    global_step_transform=global_step_from_engine(trainer),
)
  
val_evaluator.add_event_handler(Events.COMPLETED, model_checkpoint, {"model": model})

tb_logger = TensorboardLogger(log_dir="tb-logger")

tb_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED(every=log_interval),
    tag="training",
    output_transform=lambda loss: {"batch_loss": loss},
)

for tag, evaluator in [("training", train_evaluator), ("validation", val_evaluator)]:
    tb_logger.attach_output_handler(
        evaluator,
        event_name=Events.EPOCH_COMPLETED,
        tag=tag,
        metric_names="all",
        global_step_transform=global_step_from_engine(trainer),
    )

trainer.run(train_loader, max_epochs=5)

tb_logger.close()