python.control-flow
dynamic_shape_if_guard
Original source code:
import torch
Result:
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: "f32[3, 2, 2]"):
cos: "f32[3, 2, 2]" = torch.ops.aten.cos.default(arg0_1); arg0_1 = None
return (cos,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='cos'), target=None)])
Range constraints: {}
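The graph above is consistent with a forward that branches on the input's leading dimension: with a (3, 2, 2) example input the `if` predicate is statically true, so only the cos branch is traced and the sin branch is dropped. A minimal sketch that would produce this kind of output (the class name, the sin fallback, and the example input are assumptions; placeholder names in the printout may differ across PyTorch versions):

```python
import torch

class DynamicShapeIfGuard(torch.nn.Module):
    def forward(self, x):
        # The predicate is evaluated at trace time; export specializes on the
        # branch taken for the example input and guards on the shape check.
        if x.shape[0] == 3:
            return x.cos()
        return x.sin()

# With a (3, 2, 2) input the condition holds, so only aten.cos appears in the graph.
ep = torch.export.export(DynamicShapeIfGuard(), (torch.randn(3, 2, 2),))
print(ep)
```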
list_unpack
Original source code:
from typing import List
import torch
Result:
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: "f32[3, 2]", arg1_1: "i64[]", arg2_1: "i64[]"):
add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, arg1_1); arg0_1 = arg1_1 = None
return (add,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg2_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}
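The graph keeps only the add of the first tensor and the first unpacked remainder element; the list unpacking itself leaves no trace. A minimal sketch consistent with that output (the class name and the two scalar example tensors are assumptions):

```python
from typing import List

import torch

class ListUnpack(torch.nn.Module):
    def forward(self, args: List[torch.Tensor]):
        # Lists are a static construct: the unpacking is erased during tracing
        # and only the tensor ops that use the elements remain in the graph.
        x, *y = args
        return x + y[0]

# arg2_1 in the printed graph corresponds to the unused second remainder element.
ep = torch.export.export(
    ListUnpack(),
    ([torch.randn(3, 2), torch.tensor(4), torch.tensor(5)],),
)
print(ep)
```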
static_for_loop
Original source code:
import torch
Result:
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: "f32[3, 2]"):
add: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 0)
add_1: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 1)
add_2: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 2)
add_3: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 3)
add_4: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 4)
add_5: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 5)
add_6: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 6)
add_7: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 7)
add_8: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 8)
add_9: "f32[3, 2]" = torch.ops.aten.add.Tensor(arg0_1, 9); arg0_1 = None
return (add, add_1, add_2, add_3, add_4, add_5, add_6, add_7, add_8, add_9)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_1'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_2'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_3'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_4'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_5'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_6'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_7'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_8'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add_9'), target=None)])
Range constraints: {}
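A loop with a constant trip count is unrolled at trace time, which is why the graph contains ten separate aten.add nodes and returns all ten results. A minimal sketch (the class name and example input are assumptions):

```python
import torch

class StaticForLoop(torch.nn.Module):
    def forward(self, x):
        # range(10) is a Python constant, so the loop is fully unrolled:
        # each iteration becomes its own aten.add node in the exported graph.
        ret = []
        for i in range(10):
            ret.append(i + x)
        return ret

ep = torch.export.export(StaticForLoop(), (torch.randn(3, 2),))
print(ep)
```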
static_if
Original source code:
import torch
Result:
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: "f32[3, 2, 2]"):
ones: "f32[1, 1, 1]" = torch.ops.aten.ones.default([1, 1, 1], device = device(type='cpu'), pin_memory = False)
add: "f32[3, 2, 2]" = torch.ops.aten.add.Tensor(arg0_1, ones); arg0_1 = ones = None
return (add,)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
Range constraints: {}
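Here the predicate depends only on the rank of the input, which is known at trace time, so the `if` is resolved statically and only the taken branch (the add with a ones tensor) appears in the exported graph. A minimal sketch (the class name, the fallback branch, and the example input are assumptions):

```python
import torch

class StaticIf(torch.nn.Module):
    def forward(self, x):
        # len(x.shape) is a plain Python int, so this branch decision is made
        # at trace time and the untaken branch never enters the graph.
        if len(x.shape) == 3:
            return x + torch.ones(1, 1, 1)
        return x

ep = torch.export.export(StaticIf(), (torch.randn(3, 2, 2),))
print(ep)
```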