PyTorch-Ignite

How to do Cross Validation in Ignite

This how-to guide demonstrates how to perform k-fold cross validation with PyTorch-Ignite and save the best results.

Cross validation is useful for tuning model parameters, or when the available data is insufficient to properly test the model.

In this example, we will use a ResNet18 model on the MNIST dataset. The base code is the same as used in the Getting Started guide.

!pip install pytorch-ignite
Collecting pytorch-ignite
  Downloading pytorch_ignite-0.4.6-py3-none-any.whl (232 kB)
     |████████████████████████████████| 232 kB 13.3 MB/s eta 0:00:01
Requirement already satisfied: torch<2,>=1.3 in /usr/local/lib/python3.7/dist-packages (from pytorch-ignite) (1.9.0+cu102)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch<2,>=1.3->pytorch-ignite) (3.7.4.3)
Installing collected packages: pytorch-ignite
Successfully installed pytorch-ignite-0.4.6

Basic Setup

In addition to the usual libraries, we will use the scikit-learn library, which contains many learning algorithms. Here, we will use its KFold class.
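
As a quick, standalone illustration of what KFold provides (a minimal sketch, independent of the rest of this tutorial): its split() method yields one (train_idx, val_idx) pair of index arrays per fold.

from sklearn.model_selection import KFold
import numpy as np

kf = KFold(n_splits=3, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(np.arange(9)):
    # each fold: 6 training indices and 3 validation indices
    print(train_idx, val_idx)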

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, SubsetRandomSampler, ConcatDataset
from torchvision.datasets import MNIST
from torchvision.models import resnet18
from torchvision.transforms import Compose, Normalize, ToTensor

from sklearn.model_selection import KFold
import numpy as np

from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss, RunningAverage
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        self.model = resnet18(num_classes=10)
        # MNIST images are 1-channel 28x28, so swap the default 3-channel
        # stem convolution for a smaller single-channel one
        self.model.conv1 = nn.Conv2d(
            1, 64, kernel_size=3, padding=1, bias=False
        )

    def forward(self, x):
        return self.model(x)


data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])

train_dataset = MNIST(download=True, root=".", transform=data_transform, train=True)
test_dataset = MNIST(download=True, root=".", transform=data_transform, train=False)
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./MNIST/raw/train-images-idx3-ubyte.gz
Extracting ./MNIST/raw/train-images-idx3-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ./MNIST/raw/train-labels-idx1-ubyte.gz
Extracting ./MNIST/raw/train-labels-idx1-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ./MNIST/raw/t10k-images-idx3-ubyte.gz
Extracting ./MNIST/raw/t10k-images-idx3-ubyte.gz to ./MNIST/raw
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ./MNIST/raw/t10k-labels-idx1-ubyte.gz
Extracting ./MNIST/raw/t10k-labels-idx1-ubyte.gz to ./MNIST/raw

/usr/local/lib/python3.7/dist-packages/torchvision/datasets/mnist.py:498: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /pytorch/torch/csrc/utils/tensor_numpy.cpp:180.)
  return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)

def initialize():
    # Create a fresh model, optimizer and loss for every fold, so that
    # no weights leak from one fold into the next
    model = Net().to(device)
    optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-06)
    criterion = nn.CrossEntropyLoss()

    return model, optimizer, criterion

Training with k-fold

To train the model with KFold, we have to split the data into k folds. We will use a map-style dataset so that we can access the data by index. Here, we use SubsetRandomSampler to randomly sample data elements from the indices provided by KFold.

As we can see below, SubsetRandomSampler generates lists of data indices from train_idx and val_idx, which are provided by the KFold class. These index lists are then used to build the training and validation data loaders.

def setup_dataflow(dataset, train_idx, val_idx):
    train_sampler = SubsetRandomSampler(train_idx)
    val_sampler = SubsetRandomSampler(val_idx)

    train_loader = DataLoader(dataset, batch_size=128, sampler=train_sampler)
    val_loader = DataLoader(dataset, batch_size=256, sampler=val_sampler)

    return train_loader, val_loader
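
As a quick, hypothetical smoke test (our own addition, not part of the original tutorial), we can check that the two loaders draw disjoint samples and produce batches of the expected shape:

# Split the first 1000 indices 80/20 and verify the loaders are disjoint
train_loader, val_loader = setup_dataflow(train_dataset, np.arange(800), np.arange(800, 1000))
assert set(train_loader.sampler.indices).isdisjoint(set(val_loader.sampler.indices))
batch, labels = next(iter(train_loader))
print(batch.shape)  # torch.Size([128, 1, 28, 28])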

The training process will run for three epochs. For each epoch, we compute the Accuracy and the average Loss as metrics.

At the end of each epoch, we store these metrics in train_results and val_results so that we can evaluate the training progress later.

def train_model(train_loader, val_loader):
    max_epochs = 3

    train_results = []
    val_results = []

    model, optimizer, criterion = initialize()

    trainer = create_supervised_trainer(model, optimizer, criterion, device=device)
    evaluator = create_supervised_evaluator(model, metrics={"Accuracy": Accuracy(), "Loss": Loss(criterion)}, device=device)

    @trainer.on(Events.EPOCH_COMPLETED)
    def log_training_results(trainer):
        evaluator.run(train_loader)
        metrics = evaluator.state.metrics
        train_results.append(metrics)
        print(f"Training Results - Epoch[{trainer.state.epoch}] Avg accuracy: {metrics['Accuracy']:.2f} Avg loss: {metrics['Loss']:.2f}")


    @trainer.on(Events.EPOCH_COMPLETED)
    def log_validation_results(trainer):
        evaluator.run(val_loader)
        metrics = evaluator.state.metrics
        val_results.append(metrics)

    trainer.run(train_loader, max_epochs=max_epochs) 

    return train_results, val_results
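
The introduction also mentioned saving the best results. One way to do that in Ignite is with the Checkpoint handler, which can keep the model with the highest validation accuracy. The sketch below is our own addition (the helper name, the checkpoints directory, and the filename prefix are illustrative choices, not part of the original code):

from ignite.engine import Events
from ignite.handlers import Checkpoint, DiskSaver

def attach_best_model_saver(evaluator, model, fold_idx):
    # Keep only the single best checkpoint (highest validation accuracy)
    saver = Checkpoint(
        {"model": model},
        DiskSaver("checkpoints", create_dir=True, require_empty=False),
        filename_prefix=f"fold_{fold_idx}",
        n_saved=1,
        score_function=lambda engine: engine.state.metrics["Accuracy"],
        score_name="accuracy",
    )
    # The evaluator completes once per epoch, so the score is refreshed then
    evaluator.add_event_handler(Events.COMPLETED, saver)

Calling attach_best_model_saver(evaluator, model, fold_idx) inside train_model, right after the evaluator is created, would write a file like checkpoints/fold_0_model_accuracy=0.9000.pt per fold.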

Let's concatenate the two datasets so that we can split them into folds later.

dataset = ConcatDataset([train_dataset, test_dataset])

We split the dataset into three folds: in each split, two folds are used for training and the remaining one for validation.

num_folds = 3
splits = KFold(n_splits=num_folds, shuffle=True, random_state=42)

We will train the model using the folds created above, and store the metrics that the training method returns for each fold.

results_per_fold = []

for fold_idx, (train_idx, val_idx) in enumerate(splits.split(np.arange(len(dataset)))):

    print('Fold {}'.format(fold_idx + 1))

    train_loader, val_loader = setup_dataflow(dataset, train_idx, val_idx)
    train_results, val_results = train_model(train_loader, val_loader)
    results_per_fold.append([train_results, val_results])
Fold 1
Training Results - Epoch[1] Avg accuracy: 0.73 Avg loss: 1.38
Training Results - Epoch[2] Avg accuracy: 0.84 Avg loss: 0.90
Training Results - Epoch[3] Avg accuracy: 0.89 Avg loss: 0.61
Fold 2
Training Results - Epoch[1] Avg accuracy: 0.74 Avg loss: 1.35
Training Results - Epoch[2] Avg accuracy: 0.85 Avg loss: 0.86

Evaluation

After training the model, we can evaluate its overall performance.

For each fold, we take the accuracy score (current_fold[1][2]["Accuracy"]) of the validation results (current_fold[1]) at the 3rd epoch (current_fold[1][2]), which is the last epoch of our training.

Finally, we compute the mean of the validation accuracy scores across all folds. This will be the final metric for the model trained with the k-fold technique.
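
To make this double indexing easier to follow, the nesting of results_per_fold that the code below relies on looks like this:

# results_per_fold[n_fold]       -> [train_results, val_results] for one fold
# results_per_fold[n_fold][1]    -> val_results: one metrics dict per epoch
# results_per_fold[n_fold][1][2] -> metrics of the last (3rd) epoch,
#                                   e.g. {"Accuracy": 0.89, "Loss": 0.57}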

acc_sum = 0
for n_fold in range(len(results_per_fold)):
    current_fold = results_per_fold[n_fold]
    print(f"Validation Results - Fold[{n_fold + 1}] Avg accuracy: {current_fold[1][2]['Accuracy']:.2f} Avg loss: {current_fold[1][2]['Loss']:.2f}")
    acc_sum += current_fold[1][2]['Accuracy']

folds_mean = acc_sum / num_folds
print(f"Model validation average for {num_folds}-folds: {folds_mean:.2f}")
Validation Results - Fold[1] Avg accuracy: 0.89 Avg loss: 0.61
Validation Results - Fold[2] Avg accuracy: 0.90 Avg loss: 0.57
Validation Results - Fold[3] Avg accuracy: 0.89 Avg loss: 0.57
Model validation average for 3-folds: 0.89