Paddle Backend Example: Matching Image Keypoints by Graph Matching Neural Networks

This example shows how to match image keypoints by neural-network-based graph matching solvers. These graph matching solvers are designed to match two individual graphs. The matched images can then be passed to downstream tasks.

# Author: Runzhong Wang <runzhong.wang@sjtu.edu.cn>
#         Wenzheng Pan <pwz1121@sjtu.edu.cn>
#
# License: Mulan PSL v2 License

Note

The following solvers are based on matching two individual graphs and are included in this example: pca_gm(), ipca_gm() and cie().

import paddle # paddle backend
from paddle.vision.models import vgg16
import pygmtools as pygm
import matplotlib.pyplot as plt # for plotting
from matplotlib.patches import ConnectionPatch # for plotting matching result
import scipy.io as sio # for loading .mat file
import scipy.spatial as spa # for Delaunay triangulation
from sklearn.decomposition import PCA as PCAdimReduc
import itertools
import numpy as np
from PIL import Image
import warnings
warnings.filterwarnings("ignore")
pygm.set_backend('paddle') # set default backend for pygmtools

paddle.device.set_device('cpu') # paddle sets device globally
Place(cpu)

Predicting Matching by Graph Matching Neural Networks

In this section, we show how to do predictions (inference) by graph matching neural networks. Let's take PCA-GM (pca_gm()) as an example.

Load the images

The images are from the Willow Object Class dataset (this dataset is also available with the benchmark of pygmtools, see WillowObject).

The images are resized to 256x256.

obj_resize = (256, 256)
img1 = Image.open('../data/willow_duck_0001.png')
img2 = Image.open('../data/willow_duck_0002.png')
kpts1 = paddle.to_tensor(sio.loadmat('../data/willow_duck_0001.mat')['pts_coord'])
kpts2 = paddle.to_tensor(sio.loadmat('../data/willow_duck_0002.mat')['pts_coord'])
kpts1[0] = kpts1[0] * obj_resize[0] / img1.size[0]
kpts1[1] = kpts1[1] * obj_resize[1] / img1.size[1]
kpts2[0] = kpts2[0] * obj_resize[0] / img2.size[0]
kpts2[1] = kpts2[1] * obj_resize[1] / img2.size[1]
img1 = img1.resize(obj_resize, resample=Image.BILINEAR)
img2 = img2.resize(obj_resize, resample=Image.BILINEAR)
paddle_img1 = paddle.to_tensor(np.array(img1, dtype=np.float32) / 256).transpose((2, 0, 1)).unsqueeze(0) # shape: BxCxHxW
paddle_img2 = paddle.to_tensor(np.array(img2, dtype=np.float32) / 256).transpose((2, 0, 1)).unsqueeze(0) # shape: BxCxHxW

Visualize the images and keypoints

def plot_image_with_graph(img, kpt, A=None):
    plt.imshow(img)
    plt.scatter(kpt[0], kpt[1], c='w', edgecolors='k')
    if A is not None:
        for idx in paddle.nonzero(A, as_tuple=False):
            plt.plot((kpt[0, idx[0]], kpt[0, idx[1]]), (kpt[1, idx[0]], kpt[1, idx[1]]), 'k-')

plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.title('Image 1')
plot_image_with_graph(img1, kpts1)
plt.subplot(1, 2, 2)
plt.title('Image 2')
plot_image_with_graph(img2, kpts2)
[Figure: Image 1, Image 2]

Build the graphs

The graph structures are built based on the geometric structure of the keypoint sets. In this example, we build the graphs by Delaunay triangulation.

def delaunay_triangulation(kpt):
    d = spa.Delaunay(kpt.numpy().transpose())
    A = paddle.zeros((len(kpt[0]), len(kpt[0])))
    for simplex in d.simplices:
        for pair in itertools.permutations(simplex, 2):
            A[pair] = 1
    return A
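As a quick sanity check of this construction (the four points below are made up for illustration), the vertex pairs of the Delaunay simplices always form a symmetric 0/1 adjacency matrix:

```python
import itertools

import numpy as np
import scipy.spatial as spa

# four made-up 2D points in convex position -> two triangles
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.1, 1.2]])
d = spa.Delaunay(pts)

A = np.zeros((len(pts), len(pts)))
for simplex in d.simplices:
    # connect every ordered pair of vertices within each triangle
    for pair in itertools.permutations(simplex, 2):
        A[pair] = 1

# two triangles sharing one edge: 5 undirected edges = 10 directed entries
print(int(A.sum()))               # 10
print(bool(np.allclose(A, A.T)))  # True: undirected adjacency
```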

A1 = delaunay_triangulation(kpts1)
A2 = delaunay_triangulation(kpts2)

Visualize the graphs

plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.title('Image 1 with Graphs')
plot_image_with_graph(img1, kpts1, A1)
plt.subplot(1, 2, 2)
plt.title('Image 2 with Graphs')
plot_image_with_graph(img2, kpts2, A2)
[Figure: Image 1 with Graphs, Image 2 with Graphs]

Extract node features via CNN

Deep graph matching solvers can be fused with CNN feature extractors to build an end-to-end learning pipeline.

In this example, we adopt deep graph matching solvers based on matching two individual graphs. The image features are based on two intermediate layers of the VGG16 CNN model, following existing deep graph matching papers (e.g. pca_gm()).

First, let's get the VGG16 model:

vgg16_cnn = vgg16(batch_norm=True) # vgg16_bn

The list of layers of VGG16:

print(vgg16_cnn.features)
Sequential(
  (0): Conv2D(3, 64, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (1): BatchNorm2D(num_features=64, momentum=0.9, epsilon=1e-05)
  (2): ReLU()
  (3): Conv2D(64, 64, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (4): BatchNorm2D(num_features=64, momentum=0.9, epsilon=1e-05)
  (5): ReLU()
  (6): MaxPool2D(kernel_size=2, stride=2, padding=0)
  (7): Conv2D(64, 128, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (8): BatchNorm2D(num_features=128, momentum=0.9, epsilon=1e-05)
  (9): ReLU()
  (10): Conv2D(128, 128, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (11): BatchNorm2D(num_features=128, momentum=0.9, epsilon=1e-05)
  (12): ReLU()
  (13): MaxPool2D(kernel_size=2, stride=2, padding=0)
  (14): Conv2D(128, 256, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (15): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
  (16): ReLU()
  (17): Conv2D(256, 256, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (18): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
  (19): ReLU()
  (20): Conv2D(256, 256, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (21): BatchNorm2D(num_features=256, momentum=0.9, epsilon=1e-05)
  (22): ReLU()
  (23): MaxPool2D(kernel_size=2, stride=2, padding=0)
  (24): Conv2D(256, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (25): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
  (26): ReLU()
  (27): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (28): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
  (29): ReLU()
  (30): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (31): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
  (32): ReLU()
  (33): MaxPool2D(kernel_size=2, stride=2, padding=0)
  (34): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (35): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
  (36): ReLU()
  (37): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (38): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
  (39): ReLU()
  (40): Conv2D(512, 512, kernel_size=[3, 3], padding=1, data_format=NCHW)
  (41): BatchNorm2D(num_features=512, momentum=0.9, epsilon=1e-05)
  (42): ReLU()
  (43): MaxPool2D(kernel_size=2, stride=2, padding=0)
)

Let's define the CNN feature extractor, which outputs the features of layer (30) and layer (37).

class CNNNet(paddle.nn.Layer):
    def __init__(self, vgg16_module):
        super(CNNNet, self).__init__()
        # The naming of the layers follow ThinkMatch convention to load pretrained models.
        self.node_layers = paddle.nn.Sequential(*[_ for _ in vgg16_module.features[:31]])
        self.edge_layers = paddle.nn.Sequential(*[_ for _ in vgg16_module.features[31:38]])

    def forward(self, inp_img):
        feat_local = self.node_layers(inp_img)
        feat_global = self.edge_layers(feat_local)
        return feat_local, feat_global

Download the pretrained CNN weights (from ThinkMatch), load the weights and then extract the CNN features:

cnn = CNNNet(vgg16_cnn)
path = pygm.utils.download('vgg16_pca_voc_paddle.pdparams', 'https://drive.google.com/u/0/uc?export=download&confirm=Z-AR&id=1rIb_fPx20a4Q1GGlUsF8lAY1XNCyGO6L')
cnn.set_dict(paddle.load(path))
with paddle.set_grad_enabled(False):
    feat1_local, feat1_global = cnn(paddle_img1)
    feat2_local, feat2_global = cnn(paddle_img2)

Normalize the features

def l2norm(node_feat):
    return paddle.nn.functional.local_response_norm(
        node_feat, node_feat.shape[1] * 2, alpha=node_feat.shape[1] * 2, beta=0.5, k=0)
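With size = 2C, alpha = 2C, beta = 0.5 and k = 0, local response normalization reduces to plain channel-wise L2 normalization, i.e. dividing each spatial location's C-dimensional feature vector by its L2 norm. A numpy sketch of that equivalent computation (the shapes are made up for illustration):

```python
import numpy as np

# a made-up feature map of shape (B, C, H, W)
rng = np.random.default_rng(0)
feat = rng.standard_normal((1, 8, 4, 4)).astype(np.float32)

# channel-wise L2 normalization, matching LRN with
# size=2C, alpha=2C, beta=0.5, k=0
norms = np.sqrt((feat ** 2).sum(axis=1, keepdims=True))
feat_n = feat / norms

# every spatial location now holds a unit-length feature vector
print(bool(np.allclose((feat_n ** 2).sum(axis=1), 1.0, atol=1e-5)))  # True
```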

feat1_local = l2norm(feat1_local)
feat1_global = l2norm(feat1_global)
feat2_local = l2norm(feat2_local)
feat2_global = l2norm(feat2_global)

Up-sample the features to the original image size and concatenate

feat1_local_upsample = paddle.nn.functional.interpolate(feat1_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat1_global_upsample = paddle.nn.functional.interpolate(feat1_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat2_local_upsample = paddle.nn.functional.interpolate(feat2_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat2_global_upsample = paddle.nn.functional.interpolate(feat2_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat1_upsample = paddle.concat((feat1_local_upsample, feat1_global_upsample), axis=1)
feat2_upsample = paddle.concat((feat2_local_upsample, feat2_global_upsample), axis=1)
num_features = feat1_upsample.shape[1]

Visualize the extracted CNN features (dimensionality reduced by principal component analysis)

pca_dim_reduc = PCAdimReduc(n_components=3, whiten=True)
feat_dim_reduc = pca_dim_reduc.fit_transform(
    np.concatenate((
        feat1_upsample.transpose((0, 2, 3, 1)).reshape((-1, num_features)).numpy(),
        feat2_upsample.transpose((0, 2, 3, 1)).reshape((-1, num_features)).numpy()
    ), axis=0)
)
feat_dim_reduc = feat_dim_reduc / np.max(np.abs(feat_dim_reduc), axis=0, keepdims=True) / 2 + 0.5
feat1_dim_reduc = feat_dim_reduc[:obj_resize[0] * obj_resize[1], :]
feat2_dim_reduc = feat_dim_reduc[obj_resize[0] * obj_resize[1]:, :]

plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.title('Image 1 with CNN features')
plot_image_with_graph(img1, kpts1, A1)
plt.imshow(feat1_dim_reduc.reshape((obj_resize[1], obj_resize[0], 3)), alpha=0.5)
plt.subplot(1, 2, 2)
plt.title('Image 2 with CNN features')
plot_image_with_graph(img2, kpts2, A2)
plt.imshow(feat2_dim_reduc.reshape((obj_resize[1], obj_resize[0], 3)), alpha=0.5)
[Figure: Image 1 with CNN features, Image 2 with CNN features]

Extract node features by nearest interpolation

rounded_kpts1 = paddle.cast(paddle.round(kpts1), dtype='int64')
rounded_kpts2 = paddle.cast(paddle.round(kpts2), dtype='int64')

node1 = feat1_upsample.transpose((2, 3, 0, 1))[rounded_kpts1[1], rounded_kpts1[0]][:, 0]
node2 = feat2_upsample.transpose((2, 3, 0, 1))[rounded_kpts2[1], rounded_kpts2[0]][:, 0]
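The indexing above implements nearest-neighbor sampling: keypoint coordinates are rounded to the closest pixel, and the dense feature map is read out at those pixels. A minimal numpy sketch with made-up shapes:

```python
import numpy as np

# made-up dense feature map (C, H, W) and keypoints stored as a 2xN
# array, row 0 = x (column) coordinates, row 1 = y (row) coordinates
C, H, W = 6, 16, 16
feat = np.arange(C * H * W, dtype=np.float32).reshape(C, H, W)
kpts = np.array([[3.4, 10.6],
                 [7.9, 2.2]])

xs = np.round(kpts[0]).astype(int)  # nearest column per keypoint
ys = np.round(kpts[1]).astype(int)  # nearest row per keypoint
nodes = feat[:, ys, xs].T           # (N, C): one feature vector per keypoint

print(nodes.shape)  # (2, 6)
```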

Call the PCA-GM matching model

See pca_gm() for the API reference.

X = pygm.pca_gm(node1, node2, A1, A2, pretrain='voc')
X = pygm.hungarian(X)
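The Hungarian step discretizes the soft matching returned by the solver into a 0/1 permutation matrix. A sketch of the same idea using scipy.optimize.linear_sum_assignment (not pygmtools' own implementation; the score matrix below is made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# a made-up soft matching matrix (higher score = more likely match)
S = np.array([[0.9, 0.1, 0.2],
              [0.3, 0.2, 0.8],
              [0.1, 0.7, 0.4]])

# maximizing the total matched score == minimizing its negation
row, col = linear_sum_assignment(-S)
X = np.zeros_like(S)
X[row, col] = 1.0

print(X)
# every row and column contains exactly one 1: a permutation matrix
print(X.sum(axis=0), X.sum(axis=1))
```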

plt.figure(figsize=(8, 4))
plt.suptitle('Image Matching Result by PCA-GM')
ax1 = plt.subplot(1, 2, 1)
plot_image_with_graph(img1, kpts1, A1)
ax2 = plt.subplot(1, 2, 2)
plot_image_with_graph(img2, kpts2, A2)
for i in range(X.shape[0]):
    j = paddle.argmax(X[i]).item()
    con = ConnectionPatch(xyA=kpts1[:, i], xyB=kpts2[:, j], coordsA="data", coordsB="data",
                          axesA=ax1, axesB=ax2, color="red" if i != j else "green")
    plt.gca().add_artist(con)
[Figure: Image Matching Result by PCA-GM]

Matching images with other neural networks

The above pipeline also works for other deep graph matching networks. Here we give examples with ipca_gm() and cie().

Matching by the IPCA-GM model

See ipca_gm() for the API reference.

path = pygm.utils.download('vgg16_ipca_voc_paddle.pdparams', 'https://drive.google.com/u/0/uc?export=download&confirm=Z-AR&id=1h_VEmlfMAeBszoR0DvMr6EPXdNVTfTgf')
cnn.set_dict(paddle.load(path))

with paddle.set_grad_enabled(False):
    feat1_local, feat1_global = cnn(paddle_img1)
    feat2_local, feat2_global = cnn(paddle_img2)

Normalize the features

def l2norm(node_feat):
    return paddle.nn.functional.local_response_norm(
        node_feat, node_feat.shape[1] * 2, alpha=node_feat.shape[1] * 2, beta=0.5, k=0)

feat1_local = l2norm(feat1_local)
feat1_global = l2norm(feat1_global)
feat2_local = l2norm(feat2_local)
feat2_global = l2norm(feat2_global)

Up-sample the features to the original image size and concatenate

feat1_local_upsample = paddle.nn.functional.interpolate(feat1_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat1_global_upsample = paddle.nn.functional.interpolate(feat1_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat2_local_upsample = paddle.nn.functional.interpolate(feat2_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat2_global_upsample = paddle.nn.functional.interpolate(feat2_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat1_upsample = paddle.concat((feat1_local_upsample, feat1_global_upsample), axis=1)
feat2_upsample = paddle.concat((feat2_local_upsample, feat2_global_upsample), axis=1)
num_features = feat1_upsample.shape[1]

Extract node features by nearest interpolation

rounded_kpts1 = paddle.cast(paddle.round(kpts1), dtype='int64')
rounded_kpts2 = paddle.cast(paddle.round(kpts2), dtype='int64')

node1 = feat1_upsample.transpose((2, 3, 0, 1))[rounded_kpts1[1], rounded_kpts1[0]][:, 0]
node2 = feat2_upsample.transpose((2, 3, 0, 1))[rounded_kpts2[1], rounded_kpts2[0]][:, 0]

Build edge features as edge lengths

kpts1_dis = (kpts1.unsqueeze(1) - kpts1.unsqueeze(2))
kpts1_dis = paddle.norm(kpts1_dis, p=2, axis=0).detach()
kpts2_dis = (kpts2.unsqueeze(1) - kpts2.unsqueeze(2))
kpts2_dis = paddle.norm(kpts2_dis, p=2, axis=0).detach()

Q1 = paddle.exp(-kpts1_dis / obj_resize[0])
Q2 = paddle.exp(-kpts2_dis / obj_resize[0])

Call the IPCA-GM matching model

X = pygm.ipca_gm(node1, node2, A1, A2, pretrain='voc')
X = pygm.hungarian(X)

plt.figure(figsize=(8, 4))
plt.suptitle('Image Matching Result by IPCA-GM')
ax1 = plt.subplot(1, 2, 1)
plot_image_with_graph(img1, kpts1, A1)
ax2 = plt.subplot(1, 2, 2)
plot_image_with_graph(img2, kpts2, A2)
for i in range(X.shape[0]):
    j = paddle.argmax(X[i]).item()
    con = ConnectionPatch(xyA=kpts1[:, i], xyB=kpts2[:, j], coordsA="data", coordsB="data",
                          axesA=ax1, axesB=ax2, color="red" if i != j else "green")
    plt.gca().add_artist(con)
[Figure: Image Matching Result by IPCA-GM]

Matching by the CIE model

See cie() for the API reference.

path = pygm.utils.download('vgg16_cie_voc_paddle.pdparams', 'https://drive.google.com/u/0/uc?export=download&confirm=Z-AR&id=18MwP3nuMkqDiiwRd_y6rlFmtjKi9THb-')
cnn.set_dict(paddle.load(path))

with paddle.set_grad_enabled(False):
    feat1_local, feat1_global = cnn(paddle_img1)
    feat2_local, feat2_global = cnn(paddle_img2)

Normalize the features

def l2norm(node_feat):
    return paddle.nn.functional.local_response_norm(
        node_feat, node_feat.shape[1] * 2, alpha=node_feat.shape[1] * 2, beta=0.5, k=0)

feat1_local = l2norm(feat1_local)
feat1_global = l2norm(feat1_global)
feat2_local = l2norm(feat2_local)
feat2_global = l2norm(feat2_global)

Up-sample the features to the original image size and concatenate

feat1_local_upsample = paddle.nn.functional.interpolate(feat1_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat1_global_upsample = paddle.nn.functional.interpolate(feat1_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat2_local_upsample = paddle.nn.functional.interpolate(feat2_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat2_global_upsample = paddle.nn.functional.interpolate(feat2_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
feat1_upsample = paddle.concat((feat1_local_upsample, feat1_global_upsample), axis=1)
feat2_upsample = paddle.concat((feat2_local_upsample, feat2_global_upsample), axis=1)
num_features = feat1_upsample.shape[1]

Extract node features by nearest interpolation

rounded_kpts1 = paddle.cast(paddle.round(kpts1), dtype='int64')
rounded_kpts2 = paddle.cast(paddle.round(kpts2), dtype='int64')

node1 = feat1_upsample.transpose((2, 3, 0, 1))[rounded_kpts1[1], rounded_kpts1[0]][:, 0]
node2 = feat2_upsample.transpose((2, 3, 0, 1))[rounded_kpts2[1], rounded_kpts2[0]][:, 0]

Build edge features as edge lengths

kpts1_dis = (kpts1.unsqueeze(1) - kpts1.unsqueeze(2))
kpts1_dis = paddle.norm(kpts1_dis, p=2, axis=0).detach()
kpts2_dis = (kpts2.unsqueeze(1) - kpts2.unsqueeze(2))
kpts2_dis = paddle.norm(kpts2_dis, p=2, axis=0).detach()

Q1 = paddle.exp(-kpts1_dis / obj_resize[0]).unsqueeze(-1).cast('float32')
Q2 = paddle.exp(-kpts2_dis / obj_resize[0]).unsqueeze(-1).cast('float32')

Call the CIE matching model

X = pygm.cie(node1, node2, A1, A2, Q1, Q2, pretrain='voc')
X = pygm.hungarian(X)

plt.figure(figsize=(8, 4))
plt.suptitle('Image Matching Result by CIE')
ax1 = plt.subplot(1, 2, 1)
plot_image_with_graph(img1, kpts1, A1)
ax2 = plt.subplot(1, 2, 2)
plot_image_with_graph(img2, kpts2, A2)
for i in range(X.shape[0]):
    j = paddle.argmax(X[i]).item()
    con = ConnectionPatch(xyA=kpts1[:, i], xyB=kpts2[:, j], coordsA="data", coordsB="data",
                          axesA=ax1, axesB=ax2, color="red" if i != j else "green")
    plt.gca().add_artist(con)
[Figure: Image Matching Result by CIE]

Training a deep graph matching model

In this section, we show how to build a deep graph matching model which supports end-to-end training. For the image matching problem considered here, the model is composed of a CNN feature extractor and a learnable matching module. Take the PCA-GM model as an example.

Note

This simple example is intended to show you the basic forward and backward passes when training an end-to-end deep graph matching neural network. A "more formal" deep learning pipeline should involve asynchronous data loaders, batched operations, CUDA support, etc., which are all omitted here for simplicity. You may refer to ThinkMatch, a research protocol with all these advanced features.

Let's first define the neural network model. Calling get_network() will simply return the network object.

class GMNet(paddle.nn.Layer):
    def __init__(self):
        super(GMNet, self).__init__()
        self.gm_net = pygm.utils.get_network(pygm.pca_gm, pretrain=False) # fetch the network object
        self.cnn = CNNNet(vgg16_cnn)

    def forward(self, img1, img2, kpts1, kpts2, A1, A2):
        # CNN feature extractor layers
        feat1_local, feat1_global = self.cnn(img1)
        feat2_local, feat2_global = self.cnn(img2)
        feat1_local = l2norm(feat1_local)
        feat1_global = l2norm(feat1_global)
        feat2_local = l2norm(feat2_local)
        feat2_global = l2norm(feat2_global)

        # upsample feature map
        feat1_local_upsample = paddle.nn.functional.interpolate(feat1_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
        feat1_global_upsample = paddle.nn.functional.interpolate(feat1_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
        feat2_local_upsample = paddle.nn.functional.interpolate(feat2_local, (obj_resize[1], obj_resize[0]), mode='bilinear')
        feat2_global_upsample = paddle.nn.functional.interpolate(feat2_global, (obj_resize[1], obj_resize[0]), mode='bilinear')
        feat1_upsample = paddle.concat((feat1_local_upsample, feat1_global_upsample), axis=1)
        feat2_upsample = paddle.concat((feat2_local_upsample, feat2_global_upsample), axis=1)

        # assign node features
        rounded_kpts1 = paddle.cast(paddle.round(kpts1), dtype='int64')
        rounded_kpts2 = paddle.cast(paddle.round(kpts2), dtype='int64')
        node1 = feat1_upsample.transpose((2, 3, 0, 1))[rounded_kpts1[1], rounded_kpts1[0]][:, 0]
        node2 = feat2_upsample.transpose((2, 3, 0, 1))[rounded_kpts2[1], rounded_kpts2[0]][:, 0]

        # PCA-GM matching layers
        X = pygm.pca_gm(node1, node2, A1, A2, network=self.gm_net) # the network object is reused
        return X

model = GMNet()

Define the optimizer

optim = paddle.optimizer.Adam(parameters=model.parameters(), learning_rate=1e-3)

Forward pass

X = model(paddle_img1, paddle_img2, kpts1, kpts2, A1, A2)

Compute the loss

In this example, the ground truth matching matrix is a diagonal matrix. We compute the loss function via permutation_loss().

X_gt = paddle.eye(X.shape[0])
loss = pygm.utils.permutation_loss(X, X_gt)
print(f'loss={loss.item():.4f}')
loss=3.0636
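permutation_loss() is a cross-entropy between the predicted soft matching X and the ground-truth permutation X_gt. A hedged numpy sketch of one common binary cross-entropy formulation (the exact normalization used in pygmtools may differ):

```python
import numpy as np

def perm_bce(X, X_gt, eps=1e-8):
    # elementwise binary cross-entropy against the 0/1 target,
    # summed and normalized by the number of keypoints
    X = np.clip(X, eps, 1.0 - eps)
    bce = -(X_gt * np.log(X) + (1.0 - X_gt) * np.log(1.0 - X))
    return bce.sum() / X.shape[0]

X_gt = np.eye(3)
X_good = np.clip(X_gt, 0.05, 0.95)   # confident, mostly-correct prediction
X_flat = np.full((3, 3), 1.0 / 3.0)  # uninformative uniform prediction

# a better matching prediction yields a lower loss
print(bool(perm_bce(X_good, X_gt) < perm_bce(X_flat, X_gt)))  # True
```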

Backward pass

loss.backward()

Visualize the gradients

plt.figure(figsize=(4, 4))
plt.title('Gradient Sizes of PCA-GM and VGG16 layers')
plt.gca().set_xlabel('Layer Index')
plt.gca().set_ylabel('Average Gradient Size')
grad_size = []
for param in model.parameters():
    if param.grad is not None:
        grad_size.append(paddle.abs(param.grad).mean().item())
print(grad_size)
plt.stem(grad_size)
[Figure: Gradient Sizes of PCA-GM and VGG16 layers]
[0.00017377809854224324, 0.004970689304172993, 0.00020029922598041594, 0.004491607192903757, 0.00024025217862799764, 0.008655954152345657, 8.057090781221632e-06, 3.785791705013253e-05, 0.00010892890713876113, 0.008705638349056244, 0.00013127276906743646, 0.004451324697583914, 0.0004708365013357252, 5.106092437756615e-09, 0.0010730226058512926, 0.0005805740365758538, 0.00015507572970818728, 5.179759732243383e-09, 0.0023203622549772263, 0.0011218125000596046, 0.00023560502449981868, 1.688948758626907e-09, 0.0014106354210525751, 0.0010775862028822303, 0.00020390216377563775, 3.170607776326051e-09, 0.0018936353735625744, 0.0009038165444508195, 0.0002002984838327393, 6.506531979866281e-10, 0.0016013570129871368, 0.0010397139703854918, 0.00016521918587386608, 1.042045005839043e-09, 0.0017121561104431748, 0.0010783353354781866, 0.0001705087342998013, 1.1603373817337115e-09, 0.002016323385760188, 0.000996441813185811, 0.00014709813694935292, 3.410823845584332e-10, 0.001613637781701982, 0.0009654393652454019, 0.0001112126701627858, 4.714198476030163e-10, 0.0018421441782265902, 0.00108684366568923, 0.00010593782644718885, 0.0007217867532745004, 0.0014904793351888657, 0.0007666748133487999, 7.928263221401721e-05, 1.8203083484991112e-10, 0.0013098949566483498, 0.0008494521607644856, 7.790946983732283e-05, 0.0012493234826251864]


Update the model parameters. A deep learning pipeline should iterate the above forward and backward passes until convergence.

optim.step()
optim.clear_grad()

Note

This example supports both GPU and CPU, and the online documentation is built on a CPU-only machine. Efficiency will be significantly improved if you run this code on a GPU.

Total running time of the script: (0 minutes 44.512 seconds)

Gallery generated by Sphinx-Gallery