speechbrain.lm.arpa module
Tools for working with ARPA format N-gram models
Expects the ARPA format to have:
- a data header
- counts of the n-grams, listed in order
- a newline between the data and the n-gram sections
- an end marker
Example
>>> # This example loads an ARPA model and queries it with BackoffNgramLM
>>> import io
>>> from speechbrain.lm.ngram import BackoffNgramLM
>>> # First we'll put an ARPA format model in TextIO and load it:
>>> with io.StringIO() as f:
...     print("Anything can be here", file=f)
...     print("", file=f)
...     print("\\data\\", file=f)
...     print("ngram 1=2", file=f)
...     print("ngram 2=3", file=f)
...     print("", file=f)  # Ends data section
...     print("\\1-grams:", file=f)
...     print("-0.6931 a", file=f)
...     print("-0.6931 b 0.", file=f)
...     print("", file=f)  # Ends unigram section
...     print("\\2-grams:", file=f)
...     print("-0.6931 a a", file=f)
...     print("-0.6931 a b", file=f)
...     print("-0.6931 b a", file=f)
...     print("", file=f)  # Ends bigram section
...     print("\\end\\", file=f)  # Ends whole file
...     _ = f.seek(0)
...     num_grams, ngrams, backoffs = read_arpa(f)
>>> # The output of read_arpa is already formatted right for the query class:
>>> lm = BackoffNgramLM(ngrams, backoffs)
>>> lm.logprob("a", context = tuple())
-0.6931
>>> # Query that requires a backoff:
>>> lm.logprob("b", context = ("b",))
-0.6931
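The backed-off query above can be reproduced by hand: the bigram ("b", "b") is not listed, so the model adds the context's log backoff weight (0.0 for "b") to the unigram log P(b) = -0.6931. A minimal sketch of that lookup, using plain dicts shaped like read_arpa's output (the `logprob` helper here is illustrative, not the SpeechBrain API):

```python
# Tiny model matching the example above, in read_arpa's nested-dict layout.
ngrams = {
    1: {(): {"a": -0.6931, "b": -0.6931}},
    2: {("a",): {"a": -0.6931, "b": -0.6931}, ("b",): {"a": -0.6931}},
}
backoffs = {1: {("b",): 0.0}}  # log backoff weight for the context ("b",)

def logprob(token, context):
    """Recursive backoff: use the longest listed n-gram; otherwise add
    the context's log backoff weight and retry with a shorter context."""
    order = len(context) + 1
    probs = ngrams.get(order, {}).get(context)
    if probs is not None and token in probs:
        return probs[token]
    if not context:
        raise KeyError(f"token {token!r} not in the model")
    backoff = backoffs.get(len(context), {}).get(context, 0.0)
    return backoff + logprob(token, context[1:])

print(logprob("b", ("b",)))  # "b b" is unlisted, so backs off to P(b): -0.6931
```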
- Authors
Aku Rouhe 2020
Pierre Champion 2023
Summary

Functions:

arpa_to_fst: Use kaldilm to convert an ARPA LM to an FST.

read_arpa: Reads an ARPA format N-gram language model from a stream.

Reference
- speechbrain.lm.arpa.read_arpa(fstream)[source]
Reads an ARPA format N-gram language model from a stream.
- Parameters:
fstream (TextIO) – Text file stream (as you get from open()) to read the model from.
- Returns:
dict – Maps the order of the N-gram to the number of ngrams of that order. Essentially the data section of an ARPA format file.
dict – The log probabilities (first column) in the ARPA file. This is a triply nested dict. The first layer is indexed by the N-gram order (integer). The second layer is indexed by the context (tuple of tokens). The third layer is indexed by token, and maps to the log probability. This format is compatible with
speechbrain.lm.ngram.BackoffNGramLM. Example: in the ARPA format, log(P(fox|a quick red)) = -5.3 is expressed as: -5.3 a quick red fox. To access that probability, use:
ngrams_by_order[4][('a', 'quick', 'red')]['fox']
dict – The log backoff weights (last column) in the ARPA file. This is a doubly nested dict. The first layer is indexed by the N-gram order (integer). The second layer is indexed by the backoff history (tuple of tokens), i.e. the context on which the probability distribution is conditioned. This maps to the log weight. This format is compatible with
speechbrain.lm.ngram.BackoffNGramLM. Example: If log(P(fox|a quick red)) is not listed, we find log(backoff(a quick red)) = -23.4, which in the ARPA format is expressed as: a quick red -23.4. To access that value here, use:
backoffs_by_order[3][('a', 'quick', 'red')]
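The two layouts can be seen side by side in a small hand-built fragment (plain dicts mirroring read_arpa's return values for the 4-gram example above; not produced by the library itself):

```python
# Hand-built fragment in read_arpa's return layout:
# log P(fox | a quick red) = -5.3, and the trigram context
# "a quick red" carries log backoff weight -23.4.
ngrams_by_order = {
    4: {("a", "quick", "red"): {"fox": -5.3}},
}
backoffs_by_order = {
    3: {("a", "quick", "red"): -23.4},
}

# Third level of the ngram dict maps token -> log probability:
print(ngrams_by_order[4][("a", "quick", "red")]["fox"])  # -5.3
# The backoff dict maps the context tuple directly to its log weight:
print(backoffs_by_order[3][("a", "quick", "red")])  # -23.4
```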
- Raises:
ValueError – If no LM is found or the file is badly formatted.
- speechbrain.lm.arpa.arpa_to_fst(words_txt: str | Path, in_arpa: str | Path, out_fst: str | Path, ngram_order: int, disambig_symbol: str = '#0', cache: bool = True)[source]
Use kaldilm to convert an ARPA LM to an FST. For instance, you can create an ARPA LM with speechbrain.lm.train_ngram and then use this function to convert it to an FST.
Note that if the FSTs already exist in output_dir, they will not be converted again (so you may need to delete them by hand if you changed the ARPA model at any point).
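Because of that caching, regenerating the FST after editing the ARPA model means deleting the stale output first. A small sketch of that clean-up step (remove_stale_fst is an illustrative helper, not part of SpeechBrain; the demo uses a placeholder file standing in for a real FST):

```python
from pathlib import Path
import tempfile

def remove_stale_fst(out_fst):
    """Delete a previously converted FST so that arpa_to_fst
    (with cache=True) regenerates it instead of reusing the old file."""
    out_fst = Path(out_fst)
    if out_fst.exists():
        out_fst.unlink()
        return True   # a stale file was removed
    return False      # nothing cached; conversion will run anyway

# Demonstration with a placeholder file in place of the cached FST:
with tempfile.TemporaryDirectory() as d:
    fst = Path(d) / "bigram.txt.fst"
    fst.write_text("stale fst contents")
    removed = remove_stale_fst(fst)
    print(removed, fst.exists())  # True False
```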
- Parameters:
words_txt (str | Path) – Path to the words.txt symbol table used by kaldilm.
in_arpa (str | Path) – Path to the ARPA model to convert.
out_fst (str | Path) – Path where the converted FST is written.
ngram_order (int) – Order of the n-gram model (e.g. 2 for a bigram model).
disambig_symbol (str) – Disambiguation symbol to use (default: '#0').
cache (bool) – If True (default), skip the conversion when out_fst already exists.
- Raises:
ImportError – If kaldilm is not installed.
- Return type:
None
Example
>>> from speechbrain.lm.arpa import arpa_to_fst
>>> # Create a small arpa model
>>> arpa_file = getfixture('tmpdir').join("bigram.arpa")
>>> arpa_file.write(
...     "Anything can be here\n"
...     + "\n"
...     + "\\data\\\n"
...     + "ngram 1=3\n"
...     + "ngram 2=4\n"
...     + "\n"
...     + "\\1-grams:\n"
...     + "0 <s>\n"
...     + "-0.6931 a\n"
...     + "-0.6931 b 0.\n"
...     + "\n"  # Ends unigram section
...     + "\\2-grams:\n"
...     + "-0.6931 <s> a\n"
...     + "-0.6931 a a\n"
...     + "-0.6931 a b\n"
...     + "-0.6931 b a\n"
...     + "\n"  # Ends bigram section
...     + "\\end\\\n")  # Ends whole file
>>> # Create words vocab
>>> vocab = getfixture('tmpdir').join("words.txt")
>>> vocab.write(
...     "a 1\n"
...     + "b 2\n"
...     + "<s> 3\n"
...     + "#0 4")  # Ends whole file
>>> out = getfixture('tmpdir').join("bigram.txt.fst")
>>> arpa_to_fst(vocab, arpa_file, out, 2)