python.builtin

dynamic_shape_round

Note

Tags: torch.dynamic-shape, python.builtin

Support Level: NOT_SUPPORTED_YET

Original source code:

import torch

from torch.export import Dim

x = torch.ones(3, 2)
dim0_x = Dim("dim0_x")

class DynamicShapeRound(torch.nn.Module):
    """
    Calling round() on dynamic shapes is not supported.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x[: round(x.shape[0] / 2)]

Result:

AssertionError:

tensor_setattr

Note

标签: python.builtin

Support Level: SUPPORTED

Original source code:

import torch
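The source listing here is truncated to the import. A minimal module consistent with the exported graph and signature below (a string input named `'attr'` that becomes a constant argument, and an add of 4) might look like the following sketch; the exact original code is not shown in this excerpt:

```python
import torch

class TensorSetattr(torch.nn.Module):
    """setattr() on a tensor, with the attribute name passed as a plain
    string argument (hypothetical reconstruction). The attribute write has
    no effect on the traced graph; only the add is captured."""

    def forward(self, x, attr):
        # Attach an arbitrary Python attribute to the tensor.
        setattr(x, attr, torch.randn(3, 2))
        return x + 4

out = TensorSetattr()(torch.ones(3, 2), "attr")
print(out)
```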

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 4);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=ConstantArgument(value='attr'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

type_reflection_method

Note

标签: python.builtin

Support Level: SUPPORTED

Original source code:

import torch
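As above, the source listing is truncated to the import. Judging from the exported graph below (an add of 1 on a `f32[3, 4]` input) and the rewrite at the end of this section, the original module likely dispatched a classmethod through `type()`. A sketch, under the assumption that a helper class `A` exposes a classmethod `func` that adds 1:

```python
import torch

class A:
    @classmethod
    def func(cls, x):
        # Assumed body, consistent with the add-1 in the exported graph.
        return 1 + x

class TypeReflectionMethod(torch.nn.Module):
    """Looking up a classmethod via type() on a custom object; the call
    is inlined during export."""

    def forward(self, x):
        a = A()
        return type(a).func(x)

print(TypeReflectionMethod()(torch.ones(3, 4)))
```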

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 4]"):
            add: "f32[3, 4]" = torch.ops.aten.add.Tensor(arg0_1, 1);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

You can rewrite the example above as follows:

class TypeReflectionMethodRewrite(torch.nn.Module):
    """
    Custom object class methods will be inlined.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return A.func(x)
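The rewrite above references a class `A` that is not shown in this excerpt. A self-contained version, assuming `A.func` is a classmethod that adds 1 (matching the exported graph earlier in this section):

```python
import torch

class A:
    @classmethod
    def func(cls, x):
        return 1 + x  # assumed body, matching the add-1 graph above

class TypeReflectionMethodRewrite(torch.nn.Module):
    """Custom object class methods will be inlined."""

    def forward(self, x):
        # Call the classmethod directly on the class rather than going
        # through type(a), which export can trace and inline.
        return A.func(x)

print(TypeReflectionMethodRewrite()(torch.ones(3, 4)))
```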