speechbrain.k2_integration.losses module

This file contains the loss functions for k2 training. Currently, only the CTC loss is supported.

Authors:
  • Pierre Champion 2023

  • Zeyu Zhao 2023

  • Georgios Karakasidis 2023

Summary

Functions:

ctc_k2

CTC loss implemented with k2.

Reference

speechbrain.k2_integration.losses.ctc_k2(log_probs, input_lens, graph_compiler, texts, reduction='mean', beam_size=10, use_double_scores=True, is_training=True)[source]

CTC loss implemented with k2. Make sure that k2 is installed properly. Note that the blank index must be 0 in this implementation.

Parameters:
  • log_probs (torch.Tensor) – Log probabilities of shape (batch, time, num_classes).

  • input_lens (torch.Tensor) – The length of each utterance.

  • graph_compiler (k2.Fsa) – Decoding graph.

  • texts (List[str]) – A list of texts.

  • reduction (str) – The reduction to apply to the output: 'mean', 'sum', or 'none'. See the corresponding options of k2.ctc_loss.

  • beam_size (int) – The beam size.

  • use_double_scores (bool) – If True, use double precision scores.

  • is_training (bool) – If True, the returned loss requires grad.
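The three reduction modes differ only in how the per-utterance losses are combined. A minimal sketch of those semantics in plain Python (the loss values below are made up for illustration):

```python
# Hypothetical per-utterance CTC loss values, one per batch element.
per_utt = [2.5, 3.0, 1.5, 3.0]

# reduction="none": keep one loss value per utterance.
none_red = per_utt

# reduction="sum": add up the per-utterance losses.
sum_red = sum(per_utt)             # 10.0

# reduction="mean": average over the batch.
mean_red = sum_red / len(per_utt)  # 2.5
```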

Returns:

loss – The CTC loss.

Return type:

torch.Tensor

Example

>>> import torch
>>> from speechbrain.k2_integration.losses import ctc_k2
>>> from speechbrain.k2_integration.graph_compiler import CtcGraphCompiler
>>> from speechbrain.k2_integration.lexicon import Lexicon
>>> from speechbrain.k2_integration.prepare_lang import prepare_lang
>>> # Create a random batch of log-probs
>>> batch_size = 4
>>> log_probs = torch.randn(batch_size, 100, 30)
>>> log_probs.requires_grad = True
>>> # Assume all utterances have the same length so no padding was needed.
>>> input_lens = torch.ones(batch_size)
>>> # Create a small lexicon containing only two words and write it to a file.
>>> lang_tmpdir = getfixture('tmpdir')
>>> lexicon_sample = "hello h e l l o\nworld w o r l d\n<UNK> <unk>"
>>> lexicon_file = lang_tmpdir.join("lexicon.txt")
>>> lexicon_file.write(lexicon_sample)
>>> # Create a lang directory with the lexicon and L.pt, L_inv.pt, L_disambig.pt
>>> prepare_lang(lang_tmpdir)
>>> # Create a lexicon object
>>> lexicon = Lexicon(lang_tmpdir)
>>> # Create a random decoding graph
>>> graph = CtcGraphCompiler(
...     lexicon,
...     log_probs.device,
... )
>>> # Create a random batch of texts
>>> texts = ["hello world", "world hello", "hello", "world"]
>>> # Compute the loss
>>> loss = ctc_k2(
...     log_probs=log_probs,
...     input_lens=input_lens,
...     graph_compiler=graph,
...     texts=texts,
...     reduction="mean",
...     beam_size=10,
...     use_double_scores=True,
...     is_training=True,
... )