Multi-Layer Perceptron
In this example we'll learn to use mlx.nn by implementing a simple multi-layer perceptron to classify MNIST.
As a first step, import the MLX packages we need:
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim
import numpy as np
The model is defined as the MLP class, which inherits from mlx.nn.Module. We follow the standard idiom to make a new module:

1. Define an __init__ where the parameters and/or submodules are set up. See the Module class documentation for more information on how mlx.nn.Module registers parameters.
2. Define a __call__ where the computation is implemented.
class MLP(nn.Module):
    def __init__(
        self, num_layers: int, input_dim: int, hidden_dim: int, output_dim: int
    ):
        super().__init__()
        layer_sizes = [input_dim] + [hidden_dim] * num_layers + [output_dim]
        self.layers = [
            nn.Linear(idim, odim)
            for idim, odim in zip(layer_sizes[:-1], layer_sizes[1:])
        ]

    def __call__(self, x):
        for l in self.layers[:-1]:
            x = mx.maximum(l(x), 0.0)
        return self.layers[-1](x)
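As a quick smoke test (an illustrative sketch, not part of the original example; the shapes here are hypothetical), we can instantiate the module and run a forward pass on random input:

# Hypothetical check: a 2-layer MLP on a batch of random inputs
mlp = MLP(num_layers=2, input_dim=784, hidden_dim=32, output_dim=10)
x = mx.random.normal((4, 784))  # batch of 4 flattened 28x28 "images"
logits = mlp(x)
print(logits.shape)  # expected: (4, 10)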
We define the loss function, which takes the mean of the per-example cross-entropy losses. The mlx.nn.losses sub-package has implementations of some commonly used loss functions.
def loss_fn(model, X, y):
    return mx.mean(nn.losses.cross_entropy(model(X), y))
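If your version of MLX's cross_entropy supports a reduction argument (recent releases do, defaulting to "none"), an equivalent formulation lets the loss function do the averaging. This variant is just a sketch of that alternative:

def loss_fn_alt(model, X, y):
    # Assumes nn.losses.cross_entropy accepts reduction="mean";
    # otherwise average the per-example losses as above
    return nn.losses.cross_entropy(model(X), y, reduction="mean")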
We also need a function to compute the accuracy of the model on the validation set:
def eval_fn(model, X, y):
    return mx.mean(mx.argmax(model(X), axis=1) == y)
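To see what this computes, here is a small worked example with hand-picked logits (illustrative values only):

# Three examples, two classes; predictions are the argmax along axis 1
logits = mx.array([[2.0, 0.1], [0.3, 1.5], [0.9, 0.2]])
labels = mx.array([0, 1, 1])
preds = mx.argmax(logits, axis=1)  # [0, 1, 0]
print(mx.mean(preds == labels).item())  # 2 of 3 correct -> ~0.667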
Next, set up the problem parameters and load the data. To load the data, you need our mnist data loader, which we will import as mnist.
num_layers = 2
hidden_dim = 32
num_classes = 10
batch_size = 256
num_epochs = 10
learning_rate = 1e-1
# Load the data
import mnist
train_images, train_labels, test_images, test_labels = map(
    mx.array, mnist.mnist()
)
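A quick sanity check of the loaded arrays (assuming the loader returns flattened 28x28 images, as the loader in the MLX examples does) might look like:

# Expect 60,000 training and 10,000 test examples,
# each image flattened to a 784-dimensional vector
print(train_images.shape, train_labels.shape)  # (60000, 784) (60000,)
print(test_images.shape, test_labels.shape)    # (10000, 784) (10000,)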
Since we're using SGD, we need an iterator which shuffles and constructs mini-batches of examples from the training set:
def batch_iterate(batch_size, X, y):
    perm = mx.array(np.random.permutation(y.size))
    for s in range(0, y.size, batch_size):
        ids = perm[s : s + batch_size]
        yield X[ids], y[ids]
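For example (an illustrative snippet), you can pull a single mini-batch from the iterator to inspect its shape:

# Grab one shuffled mini-batch
X_batch, y_batch = next(batch_iterate(batch_size, train_images, train_labels))
print(X_batch.shape, y_batch.shape)  # (256, 784) (256,) with the settings above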
Finally, we put it all together by instantiating the model and the mlx.optimizers.SGD optimizer, and running the training loop:
# Load the model
model = MLP(num_layers, train_images.shape[-1], hidden_dim, num_classes)
mx.eval(model.parameters())

# Get a function which gives the loss and gradient of the
# loss with respect to the model's trainable parameters
loss_and_grad_fn = nn.value_and_grad(model, loss_fn)

# Instantiate the optimizer
optimizer = optim.SGD(learning_rate=learning_rate)

for e in range(num_epochs):
    for X, y in batch_iterate(batch_size, train_images, train_labels):
        loss, grads = loss_and_grad_fn(model, X, y)

        # Update the optimizer state and model parameters
        # in a single call
        optimizer.update(model, grads)

        # Force a graph evaluation
        mx.eval(model.parameters(), optimizer.state)

    accuracy = eval_fn(model, test_images, test_labels)
    print(f"Epoch {e}: Test accuracy {accuracy.item():.3f}")
Note
The mlx.nn.value_and_grad() function is a convenience function to get the gradient of a loss with respect to the trainable parameters of a model. This should not be confused with mlx.core.value_and_grad().
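To make the distinction concrete, here is a rough sketch of the pattern nn.value_and_grad wraps, assuming Module.trainable_parameters() and Module.update() behave as in current MLX:

# mx.value_and_grad differentiates an explicit function of the parameters;
# the parameters are pushed back into the model before computing the loss
def loss_wrt_params(params, X, y):
    model.update(params)
    return loss_fn(model, X, y)

raw_loss_and_grad = mx.value_and_grad(loss_wrt_params)
loss, grads = raw_loss_and_grad(model.trainable_parameters(), X, y)

# nn.value_and_grad packages this up so you can pass the model itself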
The model should reach a decent accuracy (about 95%) after just a few passes over the training set. The full example is available in the MLX GitHub repository.