prepare_layer_dropout
- torchtune.modules.prepare_layer_dropout(layers: Union[ModuleList, Iterable[Module]], prob_max: float = 0.0, prob_layer_scale: Optional[ScaleType] = ScaleType.UNIFORM, layers_str: Optional[str] = None, disable_on_eval: Optional[bool] = True) → None
Prepares a model's layers for layer dropout by wrapping each layer in a ModuleLayerDropoutWrapper. This function takes a list of layers, the maximum probability of dropping a layer, the scale type of the layer dropout probabilities across layers, a string specifying which layers to apply dropout to, and a boolean indicating whether to disable dropout during evaluation. It then wraps each of the model's layers in place with a ModuleLayerDropoutWrapper, which applies layer dropout to the input tensor.
- Parameters:
layers (Union[torch.nn.ModuleList, Iterable[torch.nn.Module]]) – The list of layers to prepare for layer dropout.
prob_max (float) – The maximum probability of dropping a layer. Defaults to 0.0.
prob_layer_scale (Optional[ScaleType]) – The scale type of the dropout probabilities across layers. Defaults to ScaleType.UNIFORM.
layers_str (Optional[str]) – A string specifying which layers to apply dropout to. Defaults to None, which applies dropout to all layers.
disable_on_eval (Optional[bool]) – Whether to disable dropout during evaluation. Defaults to True.
- Returns:
None
Example
>>> import torch
>>> from torch import nn
>>> # Define a simple model
>>> class MyModel(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.layers = nn.ModuleList([
...             nn.Linear(5, 3),
...             nn.Linear(3, 2),
...             nn.Linear(2, 1),
...             nn.Linear(1, 2),
...             nn.Linear(2, 3),
...         ])
...
...     def forward(self, x):
...         for layer in self.layers:
...             x = layer(x)
...         return x
>>> model = MyModel()
>>> # Apply layer dropout uniformly to all layers
>>> prepare_layer_dropout(model.layers, prob_max=0.2, prob_layer_scale=ScaleType.UNIFORM)
>>> # Apply layer dropout to every other layer, as described in the LayerDrop paper (Fan et al., https://arxiv.org/abs/1909.11556v1)
>>> prepare_layer_dropout(model.layers, prob_max=0.2, prob_layer_scale=ScaleType.UNIFORM, layers_str="::2")
>>> # Apply layer dropout that increases linearly across layers, as described in the Progressive Layer Dropping paper (Zhang et al., https://arxiv.org/abs/2010.13369)
>>> prepare_layer_dropout(model.layers, prob_max=0.2, prob_layer_scale=ScaleType.LINEAR)
>>> # Apply layer dropout that increases exponentially across layers, as described in the LayerSkip paper (Elhoushi et al., https://arxiv.org/abs/2404.16710)
>>> prepare_layer_dropout(model.layers, prob_max=0.2, prob_layer_scale=ScaleType.EXP)
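Since the wrapping happens in place, a quick sanity check is to verify that the entries of the ModuleList are now ModuleLayerDropoutWrapper instances and that eval mode still runs every layer when disable_on_eval=True (the default). The sketch below assumes ModuleLayerDropoutWrapper and ScaleType can be imported from torchtune.modules.layer_dropout; adjust the import path to your installed torchtune version.

>>> import torch
>>> from torch import nn
>>> from torchtune.modules import prepare_layer_dropout
>>> # Assumed import path for the wrapper and ScaleType; adjust to your torchtune version.
>>> from torchtune.modules.layer_dropout import ModuleLayerDropoutWrapper, ScaleType
>>> layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])
>>> prepare_layer_dropout(layers, prob_max=0.2, prob_layer_scale=ScaleType.LINEAR)
>>> # Wrapping is done in place: every entry is now a ModuleLayerDropoutWrapper.
>>> all(isinstance(layer, ModuleLayerDropoutWrapper) for layer in layers)
True
>>> # In train mode a wrapped layer may be stochastically skipped; in eval mode
>>> # (with disable_on_eval=True) every layer always runs.
>>> _ = layers.eval()
>>> out = torch.randn(2, 8)
>>> for layer in layers:
...     out = layer(out)

Passing disable_on_eval=False would keep dropout active at evaluation time as well, per the parameter description above.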