ExportDB

ExportDB is a centralized dataset of supported and unsupported export cases. It is targeted at users who want to know specifically what kinds of code are supported, the subtleties of export, and how to modify existing code to be compatible with export. Note that this is not an exhaustive set of everything exportdb supports, but it covers the most common and confusing use cases that users run into.

If you have a feature that you think needs a stronger guarantee to be supported in export, please create an issue in the pytorch/pytorch repo with the module: export tag.

Supported

assume_constant_result

Note

Tags: torch.escape-hatch

Support Level: SUPPORTED

Original source code:

import torch
import torch._dynamo as torchdynamo
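
The source listing here is truncated to its imports. A minimal sketch consistent with the exported graph below (a reconstruction for illustration, with illustrative class and method names, not necessarily the verbatim ExportDB source) could be:

```
import torch
import torch._dynamo as torchdynamo

class AssumeConstantResult(torch.nn.Module):
    @torchdynamo.assume_constant_result
    def get_item(self, y):
        # .item() is normally data-dependent; the decorator tells export
        # to treat the returned value as a constant.
        return y.int().item()

    def forward(self, x, y):
        # The constant result (4) is baked into the slice below.
        return x[: self.get_item(y)]
```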

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1: "i64[]"):
            slice_1: "f32[3, 2]" = torch.ops.aten.slice.Tensor(arg0_1, 0, 0, 4);  arg0_1 = None
            return (slice_1,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='slice_1'), target=None)])
Range constraints: {}

autograd_function

Note

Tags:

Support Level: SUPPORTED

Original source code:

import torch
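
The source listing here is truncated. A minimal sketch consistent with the `clone` op in the graph below (a hypothetical reconstruction, names illustrative) could be:

```
import torch

class MyAutogradFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Export traces only the forward; it lowers to aten.clone.
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output + 1

class AutogradFunction(torch.nn.Module):
    def forward(self, x):
        return MyAutogradFunction.apply(x)
```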

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            clone: "f32[3, 2]" = torch.ops.aten.clone.default(arg0_1);  arg0_1 = None
            return (clone,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='clone'), target=None)])
Range constraints: {}

class_method

Note

Tags:

Support Level: SUPPORTED

Original source code:

import torch
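
The source listing here is truncated. A minimal sketch consistent with the graph below (a hypothetical reconstruction: one Linear(4, 2), plus the same class method called three ways) could be:

```
import torch

class ClassMethod(torch.nn.Module):
    @classmethod
    def method(cls, x):
        return x + 1

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        x = self.linear(x)
        # All three call styles inline to the same `x + 1` in the graph.
        return self.method(x) * self.__class__.method(x) * type(self).method(x)
```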

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[2, 4]", arg1_1: "f32[2]", arg2_1: "f32[3, 4]"):
            t: "f32[4, 2]" = torch.ops.aten.t.default(arg0_1);  arg0_1 = None
            addmm: "f32[3, 2]" = torch.ops.aten.addmm.default(arg1_1, arg2_1, t);  arg1_1 = arg2_1 = t = None

            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(addmm, 1)
            add_1: "f32[3, 2]" = torch.ops.aten.add.Tensor(addmm, 1)

            mul: "f32[3, 2]" = torch.ops.aten.mul.Tensor(add, add_1);  add = add_1 = None

            add_2: "f32[3, 2]" = torch.ops.aten.add.Tensor(addmm, 1);  addmm = None

            mul_1: "f32[3, 2]" = torch.ops.aten.mul.Tensor(mul, add_2);  mul = add_2 = None
            return (mul_1,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='arg0_1'), target='linear.weight', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='arg1_1'), target='linear.bias', persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg2_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mul_1'), target=None)])
Range constraints: {}

cond_branch_class_method

Note

Tags: torch.cond, torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch

from functorch.experimental.control_flow import cond


class MySubModule(torch.nn.Module):
    def foo(self, x):
        return x.cos()

    def forward(self, x):
        return self.foo(x)


class CondBranchClassMethod(torch.nn.Module):
    """
    The branch functions (`true_fn` and `false_fn`) passed to cond() must follow these rules:
      - both branches must take the same arguments, which must also match the branch arguments passed to cond.
      - both branches must return a single tensor
      - the returned tensors must have the same tensor metadata, e.g. shape and dtype
      - branch functions can be free functions, nested functions, lambdas, or class methods
      - branch functions can not have closure variables
      - no inplace mutations on inputs or global variables


    This example demonstrates using a class method in cond().

    NOTE: If `pred` is tested on a dimension with batch size < 2, it will be specialized.
    """

    def __init__(self):
        super().__init__()
        self.subm = MySubModule()

    def bar(self, x):
        return x.sin()

    def forward(self, x):
        return cond(x.shape[0] <= 2, self.subm.forward, self.bar, [x])

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3]"):
            true_graph_0 = self.true_graph_0
            false_graph_0 = self.false_graph_0
            conditional = torch.ops.higher_order.cond(False, true_graph_0, false_graph_0, [arg0_1]);  true_graph_0 = false_graph_0 = arg0_1 = None
            getitem: "f32[3]" = conditional[0];  conditional = None
            return (getitem,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[3]"):
                cos: "f32[3]" = torch.ops.aten.cos.default(arg0_1);  arg0_1 = None
                return (cos,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[3]"):
                sin: "f32[3]" = torch.ops.aten.sin.default(arg0_1);  arg0_1 = None
                return (sin,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None)])
Range constraints: {}

cond_branch_nested_function

Note

Tags: torch.cond, torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch

from functorch.experimental.control_flow import cond


class CondBranchNestedFunction(torch.nn.Module):
    """
    The branch functions (`true_fn` and `false_fn`) passed to cond() must follow these rules:
      - both branches must take the same arguments, which must also match the branch arguments passed to cond.
      - both branches must return a single tensor
      - the returned tensors must have the same tensor metadata, e.g. shape and dtype
      - branch functions can be free functions, nested functions, lambdas, or class methods
      - branch functions can not have closure variables
      - no inplace mutations on inputs or global variables

    This example demonstrates using a nested function in cond().

    NOTE: If `pred` is tested on a dimension with batch size < 2, it will be specialized.
    """
    def __init__(self):
        super().__init__()

    def forward(self, x):
        def true_fn(x):
            def inner_true_fn(y):
                return x + y

            return inner_true_fn(x)

        def false_fn(x):
            def inner_false_fn(y):
                return x - y

            return inner_false_fn(x)

        return cond(x.shape[0] < 10, true_fn, false_fn, [x])

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3]"):
            true_graph_0 = self.true_graph_0
            false_graph_0 = self.false_graph_0
            conditional = torch.ops.higher_order.cond(True, true_graph_0, false_graph_0, [arg0_1]);  true_graph_0 = false_graph_0 = arg0_1 = None
            getitem: "f32[3]" = conditional[0];  conditional = None
            return (getitem,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[3]"):
                add: "f32[3]" = torch.ops.aten.add.Tensor(arg0_1, arg0_1);  arg0_1 = None
                return (add,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[3]"):
                sub: "f32[3]" = torch.ops.aten.sub.Tensor(arg0_1, arg0_1);  arg0_1 = None
                return (sub,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None)])
Range constraints: {}

cond_branch_nonlocal_variables

Note

Tags: torch.cond, torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch

from functorch.experimental.control_flow import cond


class CondBranchNonlocalVariables(torch.nn.Module):
    """
    The branch functions (`true_fn` and `false_fn`) passed to cond() must follow these rules:
    - both branches must take the same arguments, which must also match the branch arguments passed to cond.
    - both branches must return a single tensor
    - the returned tensors must have the same tensor metadata, e.g. shape and dtype
    - branch functions can be free functions, nested functions, lambdas, or class methods
    - branch functions can not have closure variables
    - no inplace mutations on inputs or global variables

    This example demonstrates how to rewrite code to avoid capturing closure variables in branch functions.

    The code below will not work because capturing closure variables is not supported.
    ```
    my_tensor_var = x + 100
    my_primitive_var = 3.14

    def true_fn(y):
        nonlocal my_tensor_var, my_primitive_var
        return y + my_tensor_var + my_primitive_var

    def false_fn(y):
        nonlocal my_tensor_var, my_primitive_var
        return y - my_tensor_var - my_primitive_var

    return cond(x.shape[0] > 5, true_fn, false_fn, [x])
    ```

    NOTE: If `pred` is tested on a dimension with batch size < 2, it will be specialized.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        my_tensor_var = x + 100
        my_primitive_var = 3.14

        def true_fn(x, y, z):
            return x + y + z

        def false_fn(x, y, z):
            return x - y - z

        return cond(
            x.shape[0] > 5,
            true_fn,
            false_fn,
            [x, my_tensor_var, torch.tensor(my_primitive_var)],
        )

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, _lifted_tensor_constant0: "f32[]", arg0_1: "f32[6]"):
            add: "f32[6]" = torch.ops.aten.add.Tensor(arg0_1, 100)

            lift_fresh_copy: "f32[]" = torch.ops.aten.lift_fresh_copy.default(_lifted_tensor_constant0);  _lifted_tensor_constant0 = None

            true_graph_0 = self.true_graph_0
            false_graph_0 = self.false_graph_0
            conditional = torch.ops.higher_order.cond(True, true_graph_0, false_graph_0, [arg0_1, add, lift_fresh_copy]);  true_graph_0 = false_graph_0 = arg0_1 = add = lift_fresh_copy = None
            getitem: "f32[6]" = conditional[0];  conditional = None
            return (getitem,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[6]", arg1_1: "f32[6]", arg2_1: "f32[]"):
                add: "f32[6]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
                add_1: "f32[6]" = torch.ops.aten.add.Tensor(add, arg2_1);  add = arg2_1 = None
                return (add_1,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[6]", arg1_1: "f32[6]", arg2_1: "f32[]"):
                sub: "f32[6]" = torch.ops.aten.sub.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
                sub_1: "f32[6]" = torch.ops.aten.sub.Tensor(sub, arg2_1);  sub = arg2_1 = None
                return (sub_1,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.CONSTANT_TENSOR: 4>, arg=TensorArgument(name='_lifted_tensor_constant0'), target='_lifted_tensor_constant0', persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None)])
Range constraints: {}

cond_closed_over_variable

Note

Tags: torch.cond, python.closure

Support Level: SUPPORTED

Original source code:

import torch

from functorch.experimental.control_flow import cond


class CondClosedOverVariable(torch.nn.Module):
    """
    torch.cond() supports branches closed over arbitrary variables.
    """

    def forward(self, pred, x):
        def true_fn(val):
            return x * 2

        def false_fn(val):
            return x - 2

        return cond(pred, true_fn, false_fn, [x + 1])

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "b8[]", arg1_1: "f32[3, 2]"):
            true_graph_0 = self.true_graph_0
            false_graph_0 = self.false_graph_0
            conditional = torch.ops.higher_order.cond(arg0_1, true_graph_0, false_graph_0, [arg1_1]);  arg0_1 = true_graph_0 = false_graph_0 = arg1_1 = None
            getitem: "f32[3, 2]" = conditional[0];  conditional = None
            return (getitem,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[3, 2]"):
                mul: "f32[3, 2]" = torch.ops.aten.mul.Tensor(arg0_1, 2);  arg0_1 = None
                return (mul,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[3, 2]"):
                sub: "f32[3, 2]" = torch.ops.aten.sub.Tensor(arg0_1, 2);  arg0_1 = None
                return (sub,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None)])
Range constraints: {}

cond_operands

Note

Tags: torch.cond, torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch

from torch.export import Dim
from functorch.experimental.control_flow import cond

x = torch.randn(3, 2)
y = torch.ones(2)
dim0_x = Dim("dim0_x")

class CondOperands(torch.nn.Module):
    """
    The operands passed to cond() must be:
    - a list of tensors
    - match the arguments of `true_fn` and `false_fn`

    NOTE: If `pred` is tested on a dimension with batch size < 2, it will be specialized.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        def true_fn(x, y):
            return x + y

        def false_fn(x, y):
            return x - y

        return cond(x.shape[0] > 2, true_fn, false_fn, [x, y])

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[s0, 2]", arg1_1: "f32[2]"):
            sym_size_int: "Sym(s0)" = torch.ops.aten.sym_size.int(arg0_1, 0)
            gt: "Sym(s0 > 2)" = sym_size_int > 2;  sym_size_int = None
            true_graph_0 = self.true_graph_0
            false_graph_0 = self.false_graph_0
            conditional = torch.ops.higher_order.cond(gt, true_graph_0, false_graph_0, [arg0_1, arg1_1]);  gt = true_graph_0 = false_graph_0 = arg0_1 = arg1_1 = None
            getitem: "f32[s0, 2]" = conditional[0];  conditional = None
            return (getitem,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[s0, 2]", arg1_1: "f32[2]"):
                add: "f32[s0, 2]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
                return (add,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[s0, 2]", arg1_1: "f32[2]"):
                sub: "f32[s0, 2]" = torch.ops.aten.sub.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
                return (sub,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None)])
Range constraints: {s0: ValueRanges(lower=2, upper=oo, is_bool=False)}

cond_predicate

Note

Tags: torch.cond, torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch

from functorch.experimental.control_flow import cond


class CondPredicate(torch.nn.Module):
    """
    The conditional statement (aka predicate) passed to cond() must be one of the following:
      - a torch.Tensor with a single element
      - a boolean expression

    NOTE: If `pred` is tested on a dimension with batch size < 2, it will be specialized.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        pred = x.dim() > 2 and x.shape[2] > 10

        return cond(pred, lambda x: x.cos(), lambda y: y.sin(), [x])

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[6, 4, 3]"):
            true_graph_0 = self.true_graph_0
            false_graph_0 = self.false_graph_0
            conditional = torch.ops.higher_order.cond(False, true_graph_0, false_graph_0, [arg0_1]);  true_graph_0 = false_graph_0 = arg0_1 = None
            getitem: "f32[6, 4, 3]" = conditional[0];  conditional = None
            return (getitem,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[6, 4, 3]"):
                cos: "f32[6, 4, 3]" = torch.ops.aten.cos.default(arg0_1);  arg0_1 = None
                return (cos,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[6, 4, 3]"):
                sin: "f32[6, 4, 3]" = torch.ops.aten.sin.default(arg0_1);  arg0_1 = None
                return (sin,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None)])
Range constraints: {}

constrain_as_size_example

Note

Tags: torch.dynamic-value, torch.escape-hatch

Support Level: SUPPORTED

Original source code:

import torch
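
The source listing here is truncated. A minimal sketch consistent with the graph below (a hypothetical reconstruction assuming the torch._constrain_as_size API of the release this page was generated from; newer releases spell this differently, e.g. torch._check_is_size) could be:

```
import torch

class ConstrainAsSizeExample(torch.nn.Module):
    def forward(self, x):
        a = x.item()
        # Constrain the data-dependent scalar so it can be used as a size.
        # (API name is an assumption; it varies across PyTorch versions.)
        torch._constrain_as_size(a, min=0, max=5)
        return torch.ones((a, 5))
```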

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "i64[]"):
            _local_scalar_dense: "Sym(u4)" = torch.ops.aten._local_scalar_dense.default(arg0_1);  arg0_1 = None

            ge: "Sym(u4 >= 0)" = _local_scalar_dense >= 0
            scalar_tensor: "f32[]" = torch.ops.aten.scalar_tensor.default(ge);  ge = None
            _assert_async = torch.ops.aten._assert_async.msg(scalar_tensor, '_local_scalar_dense is outside of inline constraint [0, 5].');  scalar_tensor = None
            le: "Sym(u4 <= 5)" = _local_scalar_dense <= 5
            scalar_tensor_1: "f32[]" = torch.ops.aten.scalar_tensor.default(le);  le = None
            _assert_async_1 = torch.ops.aten._assert_async.msg(scalar_tensor_1, '_local_scalar_dense is outside of inline constraint [0, 5].');  scalar_tensor_1 = None

            sym_constrain_range_for_size = torch.ops.aten.sym_constrain_range_for_size.default(_local_scalar_dense, min = 0, max = 5)

            ones: "f32[u4, 5]" = torch.ops.aten.ones.default([_local_scalar_dense, 5], device = device(type='cpu'), pin_memory = False);  _local_scalar_dense = None
            return (ones,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='ones'), target=None)])
Range constraints: {u0: ValueRanges(lower=0, upper=5, is_bool=False), u1: ValueRanges(lower=0, upper=5, is_bool=False), u4: ValueRanges(lower=0, upper=5, is_bool=False)}

constrain_as_value_example

Note

Tags: torch.dynamic-value, torch.escape-hatch

Support Level: SUPPORTED

Original source code:

import torch
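
The source listing here is truncated. A minimal sketch consistent with the graph below (a hypothetical reconstruction assuming the torch._constrain_as_value API of this release; later releases use torch._check-style calls) could be:

```
import torch

class ConstrainAsValueExample(torch.nn.Module):
    def forward(self, x, y):
        a = x.item()
        # Bound the scalar's value range so the branch below is decidable.
        # (API name is an assumption; it varies across PyTorch versions.)
        torch._constrain_as_value(a, min=0, max=5)
        if a < 6:
            return y.sin()
        return y.cos()
```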

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "i64[]", arg1_1: "f32[5, 5]"):
            _local_scalar_dense: "Sym(u4)" = torch.ops.aten._local_scalar_dense.default(arg0_1);  arg0_1 = None

            ge: "Sym(u4 >= 0)" = _local_scalar_dense >= 0
            scalar_tensor: "f32[]" = torch.ops.aten.scalar_tensor.default(ge);  ge = None
            _assert_async = torch.ops.aten._assert_async.msg(scalar_tensor, '_local_scalar_dense is outside of inline constraint [0, 5].');  scalar_tensor = None
            le: "Sym(u4 <= 5)" = _local_scalar_dense <= 5
            scalar_tensor_1: "f32[]" = torch.ops.aten.scalar_tensor.default(le);  le = None
            _assert_async_1 = torch.ops.aten._assert_async.msg(scalar_tensor_1, '_local_scalar_dense is outside of inline constraint [0, 5].');  scalar_tensor_1 = None

            sym_constrain_range = torch.ops.aten.sym_constrain_range.default(_local_scalar_dense, min = 0, max = 5);  _local_scalar_dense = None

            sin: "f32[5, 5]" = torch.ops.aten.sin.default(arg1_1);  arg1_1 = None
            return (sin,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='sin'), target=None)])
Range constraints: {u0: ValueRanges(lower=0, upper=5, is_bool=False), u1: ValueRanges(lower=0, upper=5, is_bool=False), u4: ValueRanges(lower=0, upper=5, is_bool=False)}

decorator

Note

Tags:

Support Level: SUPPORTED

Original source code:

import functools

import torch
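
The source listing here is truncated. A minimal sketch consistent with the graph below (a reconstruction for illustration; the decorator name is an assumption) could be:

```
import functools

import torch

def test_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs) + 1
    return wrapper

class Decorator(torch.nn.Module):
    @test_decorator
    def forward(self, x, y):
        # The decorator's `+ 1` is traced along with the body.
        return x + y
```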

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1: "f32[3, 2]"):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None

            add_1: "f32[3, 2]" = torch.ops.aten.add.Tensor(add, 1);  add = None
            return (add_1,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_1'), target=None)])
Range constraints: {}

dictionary

Note

Tags: python.data-structure

Support Level: SUPPORTED

Original source code:

import torch
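
The source listing here is truncated. A minimal sketch consistent with the two multiplications in the graph below (a reconstruction for illustration) could be:

```
import torch

class Dictionary(torch.nn.Module):
    def forward(self, x, y):
        elements = {}
        # Dict construction and lookup happen at trace time; only the
        # tensor multiplications appear in the exported graph.
        elements["x2"] = x * x
        y = y * elements["x2"]
        return {"y": y}
```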

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1: "i64[]"):
            mul: "f32[3, 2]" = torch.ops.aten.mul.Tensor(arg0_1, arg0_1);  arg0_1 = None

            mul_1: "f32[3, 2]" = torch.ops.aten.mul.Tensor(arg1_1, mul);  arg1_1 = mul = None
            return (mul_1,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mul_1'), target=None)])
Range constraints: {}

dynamic_shape_assert

Note

Tags: python.assert

Support Level: SUPPORTED

Original source code:

import torch
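
The source listing here is truncated. A minimal sketch consistent with the pass-through graph below (a hypothetical reconstruction; the exact assert conditions are assumptions) could be:

```
import torch

class DynamicShapeAssert(torch.nn.Module):
    def forward(self, x):
        # Python asserts on shapes are evaluated at trace time and
        # leave no ops in the graph.
        assert x.shape[0] > 2, f"{x.shape[0]} is greater than 2"
        assert x.shape[0] > 1
        return x
```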

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            return (arg0_1,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None)])
Range constraints: {}

dynamic_shape_constructor

Note

Tags: torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch
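
The source listing here is truncated. A minimal sketch consistent with the graph below (a reconstruction for illustration) could be:

```
import torch

class DynamicShapeConstructor(torch.nn.Module):
    def forward(self, x):
        # The constructor size is computed from the input shape
        # (3 * 2 = 6 for the example input below).
        return torch.ones(x.shape[0] * 2)
```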

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            ones: "f32[6]" = torch.ops.aten.ones.default([6], device = device(type='cpu'), pin_memory = False)
            return (ones,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='ones'), target=None)])
Range constraints: {}

dynamic_shape_if_guard

Note

Tags: torch.dynamic-shape, python.control-flow

Support Level: SUPPORTED

Original source code:

import torch
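
The source listing here is truncated. A minimal sketch consistent with the graph below (a reconstruction for illustration) could be:

```
import torch

class DynamicShapeIfGuard(torch.nn.Module):
    def forward(self, x):
        # The `if` is resolved at trace time, so only the taken branch
        # (cos, since x.shape[0] == 3) is captured.
        if x.shape[0] == 3:
            return x.cos()
        return x.sin()
```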

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2, 2]"):
            cos: "f32[3, 2, 2]" = torch.ops.aten.cos.default(arg0_1);  arg0_1 = None
            return (cos,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='cos'), target=None)])
Range constraints: {}

dynamic_shape_map

Note

Tags: torch.dynamic-shape, torch.map

Support Level: SUPPORTED

Original source code:

import torch

from functorch.experimental.control_flow import map


class DynamicShapeMap(torch.nn.Module):
    """
    functorch map() maps a function over the first tensor dimension.
    """

    def __init__(self):
        super().__init__()

    def forward(self, xs, y):
        def body(x, y):
            return x + y

        return map(body, xs, y)

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1: "f32[2]"):
            body_graph_0 = self.body_graph_0
            map_impl = torch.ops.higher_order.map_impl(body_graph_0, [arg0_1], [arg1_1]);  body_graph_0 = arg0_1 = arg1_1 = None
            getitem: "f32[3, 2]" = map_impl[0];  map_impl = None
            return (getitem,)

        class <lambda>(torch.nn.Module):
            def forward(self, arg0_1: "f32[2]", arg1_1: "f32[2]"):
                add: "f32[2]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
                return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None)])
Range constraints: {}

dynamic_shape_slicing

Note

Tags: torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch
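
The source listing here is truncated. A minimal sketch consistent with the two slice ops in the graph below (a reconstruction for illustration) could be:

```
import torch

class DynamicShapeSlicing(torch.nn.Module):
    def forward(self, x):
        # Each slice lowers to an aten.slice.Tensor call.
        return x[:1, 1::2]
```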

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            slice_1: "f32[1, 2]" = torch.ops.aten.slice.Tensor(arg0_1, 0, 0, 1);  arg0_1 = None
            slice_2: "f32[1, 1]" = torch.ops.aten.slice.Tensor(slice_1, 1, 1, 9223372036854775807, 2);  slice_1 = None
            return (slice_2,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='slice_2'), target=None)])
Range constraints: {}

dynamic_shape_view

Note

Tags: torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch
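
The source listing here is truncated. A minimal sketch consistent with the view + permute in the graph below (a reconstruction for illustration) could be:

```
import torch

class DynamicShapeView(torch.nn.Module):
    def forward(self, x):
        # Reshape the last dim of a 10x10 input into (2, 5), then permute.
        new_x_shape = x.size()[:-1] + (2, 5)
        x = x.view(*new_x_shape)
        return x.permute(0, 2, 1)
```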

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[10, 10]"):
            view: "f32[10, 2, 5]" = torch.ops.aten.view.default(arg0_1, [10, 2, 5]);  arg0_1 = None

            permute: "f32[10, 5, 2]" = torch.ops.aten.permute.default(view, [0, 2, 1]);  view = None
            return (permute,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='permute'), target=None)])
Range constraints: {}

fn_with_kwargs

Note

Tags: python.data-structure

Support Level: SUPPORTED

Original source code:

import torch
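
The source listing here is truncated. A minimal sketch consistent with the eight flattened inputs and seven multiplications below (a hypothetical reconstruction; the parameter and kwarg names "input0"/"input1" are assumptions) could be:

```
import torch

class FnWithKwargs(torch.nn.Module):
    def forward(self, pos0, tuple0, *myargs, mykw0, **mykwargs):
        # Positional args, *args, keyword-only args and **kwargs are all
        # flattened into individual tensor inputs of the exported graph.
        out = pos0
        for arg in tuple0:
            out = out * arg
        for arg in myargs:
            out = out * arg
        out = out * mykw0
        out = out * mykwargs["input0"] * mykwargs["input1"]
        return out
```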

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[4]", arg1_1: "f32[4]", arg2_1: "f32[4]", arg3_1: "f32[4]", arg4_1: "f32[4]", arg5_1: "f32[4]", arg6_1: "f32[4]", arg7_1: "f32[4]"):
            mul: "f32[4]" = torch.ops.aten.mul.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
            mul_1: "f32[4]" = torch.ops.aten.mul.Tensor(mul, arg2_1);  mul = arg2_1 = None

            mul_2: "f32[4]" = torch.ops.aten.mul.Tensor(mul_1, arg3_1);  mul_1 = arg3_1 = None
            mul_3: "f32[4]" = torch.ops.aten.mul.Tensor(mul_2, arg4_1);  mul_2 = arg4_1 = None

            mul_4: "f32[4]" = torch.ops.aten.mul.Tensor(mul_3, arg5_1);  mul_3 = arg5_1 = None

            mul_5: "f32[4]" = torch.ops.aten.mul.Tensor(mul_4, arg6_1);  mul_4 = arg6_1 = None
            mul_6: "f32[4]" = torch.ops.aten.mul.Tensor(mul_5, arg7_1);  mul_5 = arg7_1 = None
            return (mul_6,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg2_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg3_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg4_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg5_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg6_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg7_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mul_6'), target=None)])
Range constraints: {}

list_contains

Note

Tags: torch.dynamic-shape, python.data-structure, python.assert

Support Level: SUPPORTED

Original source code:

import torch
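
The source listing here is truncated. A minimal sketch consistent with the single add in the graph below (a hypothetical reconstruction; the exact membership checks are assumptions) could be:

```
import torch

class ListContains(torch.nn.Module):
    def forward(self, x):
        # `in` checks against lists resolve at trace time and leave
        # no ops in the graph.
        assert x.size(-1) in [6, 2]
        assert x.size(0) not in [4, 5, 6]
        assert "monkey" not in ["cow", "pig"]
        return x + x
```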

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, arg0_1);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

list_unpack

Note

Tags: python.data-structure, python.control-flow

Support Level: SUPPORTED

Original source code:

from typing import List

import torch
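
The source listing here is truncated. A minimal sketch consistent with the three flattened inputs below (a reconstruction for illustration) could be:

```
from typing import List

import torch

class ListUnpack(torch.nn.Module):
    def forward(self, args: List[torch.Tensor]):
        # Starred unpacking of the input list happens at trace time.
        x, *y = args
        return x + y[0]
```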

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1: "i64[]", arg2_1: "i64[]"):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg2_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

nested_function

Note

Tags: python.closure

Support Level: SUPPORTED

Original source code:

import torch
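
The source listing here is truncated. A minimal sketch consistent with the graph below (a hypothetical reconstruction) could be:

```
import torch

class NestedFunction(torch.nn.Module):
    def forward(self, a, b):
        x = a + b
        z = a - b

        def closure(y):
            nonlocal x
            x += 1  # in-place, so `y` (the same tensor) sees the update
            return x * y + z

        return closure(x)
```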

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1: "f32[2]"):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1)

            sub: "f32[3, 2]" = torch.ops.aten.sub.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None

            add_1: "f32[3, 2]" = torch.ops.aten.add.Tensor(add, 1);  add = None

            mul: "f32[3, 2]" = torch.ops.aten.mul.Tensor(add_1, add_1);  add_1 = None
            add_2: "f32[3, 2]" = torch.ops.aten.add.Tensor(mul, sub);  mul = sub = None
            return (add_2,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_2'), target=None)])
Range constraints: {}

null_context_manager

Note

Tags: python.context-manager

Support Level: SUPPORTED

Original source code:

import contextlib

import torch
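
The source listing here is truncated. A minimal sketch consistent with the graph below (a reconstruction for illustration) could be:

```
import contextlib

import torch

class NullContextManager(torch.nn.Module):
    def forward(self, x):
        # A no-op context manager is traced through transparently.
        ctx = contextlib.nullcontext()
        with ctx:
            return x.sin() + x.cos()
```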

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            sin: "f32[3, 2]" = torch.ops.aten.sin.default(arg0_1)
            cos: "f32[3, 2]" = torch.ops.aten.cos.default(arg0_1);  arg0_1 = None
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(sin, cos);  sin = cos = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

pytree_flatten

Note

Tags:

Support Level: SUPPORTED

Original source code:

import torch

from torch.utils import _pytree as pytree


class PytreeFlatten(torch.nn.Module):
    """
    Pytree from PyTorch can be captured by TorchDynamo.
    """
    def __init__(self):
        super().__init__()

    def forward(self, x):
        y, spec = pytree.tree_flatten(x)
        return y[0] + 1

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1: "f32[3, 2]"):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 1);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

scalar_output

Note

Tags: torch.dynamic-shape

Support Level: SUPPORTED

Original source code:

import torch

from torch.export import Dim

x = torch.ones(3, 2)
dim1_x = Dim("dim1_x")

class ScalarOutput(torch.nn.Module):
    """
    Returning scalar values from the graph is supported, in addition to Tensor outputs. Symbolic shapes are captured and rank is specialized.
    """
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x.shape[1] + 1

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, s0]"):
            # No stacktrace found for the following nodes
            sym_size_int: "Sym(s0)" = torch.ops.aten.sym_size.int(arg0_1, 1);  arg0_1 = None
            add: "Sym(s0 + 1)" = sym_size_int + 1;  sym_size_int = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=SymIntArgument(name='add'), target=None)])
Range constraints: {s0: ValueRanges(lower=2, upper=oo, is_bool=False)}

specialized_attribute

Note

Tags:

Support Level: SUPPORTED

Original source code:

from enum import Enum

import torch
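
The source listing here is truncated. A minimal sketch consistent with the `x * x + 4` graph below (a hypothetical reconstruction; the enum and attribute values are assumptions) could be:

```
from enum import Enum

import torch

class Animal(Enum):
    COW = "moo"

class SpecializedAttribute(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = "moo"
        self.b = 4

    def forward(self, x):
        # The attribute comparison specializes at trace time, leaving
        # only `x * x + 4` in the graph.
        if self.a == Animal.COW.value:
            return x * x + self.b
        else:
            raise ValueError("bad")
```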

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            mul: "f32[3, 2]" = torch.ops.aten.mul.Tensor(arg0_1, arg0_1);  arg0_1 = None
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(mul, 4);  mul = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

static_for_loop

Note

Tags: python.control-flow

Support Level: SUPPORTED

Original source code:

import torch
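
The source listing here is truncated. A minimal sketch consistent with the ten unrolled adds below (a reconstruction for illustration) could be:

```
import torch

class StaticForLoop(torch.nn.Module):
    def forward(self, x):
        # A loop over a compile-time constant range is fully unrolled.
        ret = []
        for i in range(10):
            ret.append(i + x)
        return ret
```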

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 0)
            add_1: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 1)
            add_2: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 2)
            add_3: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 3)
            add_4: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 4)
            add_5: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 5)
            add_6: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 6)
            add_7: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 7)
            add_8: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 8)
            add_9: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 9);  arg0_1 = None
            return (add, add_1, add_2, add_3, add_4, add_5, add_6, add_7, add_8, add_9)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_1'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_2'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_3'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_4'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_5'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_6'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_7'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_8'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_9'), target=None)])
Range constraints: {}

static_if

Note

Tags: python.control-flow

Support Level: SUPPORTED

Original source code:

import torch
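
The source listing here is truncated. A minimal sketch consistent with the graph below (a reconstruction for illustration) could be:

```
import torch

class StaticIf(torch.nn.Module):
    def forward(self, x):
        # The condition depends only on the (static) rank, so only the
        # taken branch is captured.
        if len(x.shape) == 3:
            return x + torch.ones(1, 1, 1)
        return x
```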

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2, 2]"):
            ones: "f32[1, 1, 1]" = torch.ops.aten.ones.default([1, 1, 1], device = device(type='cpu'), pin_memory = False)
            add: "f32[3, 2, 2]" = torch.ops.aten.add.Tensor(arg0_1, ones);  arg0_1 = ones = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

tensor_setattr

Note

Tags: python.builtin

Support Level: SUPPORTED

Original source code:

import torch
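
The source listing here is truncated. A minimal sketch consistent with the graph and the ConstantArgument(value='attr') below (a hypothetical reconstruction) could be:

```
import torch

class TensorSetattr(torch.nn.Module):
    def forward(self, x, attr):
        # setattr on a tensor is a trace-time Python operation; only
        # the arithmetic survives in the graph.
        setattr(x, attr, torch.randn(3, 2))
        return x + 4
```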

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]", arg1_1):
            add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 4);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=ConstantArgument(value='attr'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

type_reflection_method

Note

Tags: python.builtin

Support Level: SUPPORTED

Original source code:

import torch
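
The source listing here is truncated. A minimal sketch consistent with the graph below and with the rewrite shown afterwards (a hypothetical reconstruction defining the class `A` that the rewrite refers to) could be:

```
import torch

class A:
    @classmethod
    def func(cls, x):
        return 1 + x

class TypeReflectionMethod(torch.nn.Module):
    def forward(self, x):
        a = A()
        # type(a) is resolved at trace time, then func is inlined.
        return type(a).func(x)
```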

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 4]"):
            add: "f32[3, 4]" = torch.ops.aten.add.Tensor(arg0_1, 1);  arg0_1 = None
            return (add,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}

You can rewrite the example above to something like the following:

class TypeReflectionMethodRewrite(torch.nn.Module):
    """
    Custom object class methods will be inlined.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return A.func(x)

user_input_mutation

Note

Tags: torch.mutation

Support Level: SUPPORTED

Original source code:

import torch
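
The source listing here is truncated. A minimal sketch consistent with the graph and signature below (a reconstruction for illustration) could be:

```
import torch

class UserInputMutation(torch.nn.Module):
    def forward(self, x):
        # The in-place mutation of a user input is reflected as a
        # USER_INPUT_MUTATION entry in the graph signature.
        x.mul_(2)
        return x.cos()
```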

Result:

ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[3, 2]"):
            mul: "f32[3, 2]" = torch.ops.aten.mul.Tensor(arg0_1, 2);  arg0_1 = None

            cos: "f32[3, 2]" = torch.ops.aten.cos.default(mul)
            return (mul, cos)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_INPUT_MUTATION: 6>, arg=TensorArgument(name='mul'), target='arg0_1'), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='cos'), target=None)])
Range constraints: {}

Not Supported Yet

dynamic_shape_round

Note

Tags: torch.dynamic-shape, python.builtin

Support Level: NOT_SUPPORTED_YET

Original source code:

import torch

from torch.export import Dim

x = torch.ones(3, 2)
dim0_x = Dim("dim0_x")

class DynamicShapeRound(torch.nn.Module):
    """
    Calling round() on dynamic shapes is not supported.
    """

    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x[: round(x.shape[0] / 2)]

Result:

AssertionError:

model_attr_mutation

Note

Tags: python.object-model

Support Level: NOT_SUPPORTED_YET

Original source code:

import torch
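
The source listing here is truncated. A minimal sketch that triggers the error below (a hypothetical reconstruction; attribute shapes are assumptions) could be:

```
import torch

class ModelAttrMutation(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.attr_list = [torch.ones(3, 2), torch.ones(3, 2)]

    def recreate_list(self):
        return [torch.zeros(3, 2), torch.zeros(3, 2)]

    def forward(self, x):
        # Reassigning a module attribute during forward triggers the
        # AssertionError shown below.
        self.attr_list = self.recreate_list()
        return x.sum() + self.attr_list[0].sum()
```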

Result:

AssertionError: Mutating module attribute attr_list during export.

optional_input

Note

Tags: python.object-model

Support Level: NOT_SUPPORTED_YET

Original source code:

import torch
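
The source listing here is truncated. A minimal sketch that could trigger the error below (a hypothetical reconstruction; the optional argument's default is an assumption) could be:

```
import torch

class OptionalInput(torch.nn.Module):
    def forward(self, x, y=None):
        # Export rejects a None tensor input at trace time.
        if y is not None:
            return x + y
        return x
```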

Result:

AssertionError: Unexpectedly found a <class 'NoneType'> in the inputs.

torch_sym_min

Note

Tags: torch.operator

Support Level: NOT_SUPPORTED_YET

Original source code:

import torch
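
The source listing here is truncated. A minimal sketch that produces the error below (a hypothetical reconstruction) could be:

```
import torch

class TorchSymMin(torch.nn.Module):
    def forward(self, x):
        # torch.sym_min returns a plain (symbolic) int, which export
        # currently rejects as a torch.* op result.
        return x.sum() + torch.sym_min(x.size(0), 100)
```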

Result:

Unsupported: torch.* op returned non-Tensor int call_function <function sym_min at 0x7f268479fd30>