
Sequence Models and Long Short-Term Memory Networks

Created On: Apr 08, 2017 | Last Updated: Jan 07, 2022 | Last Verified: Not Verified

At this point, we have seen various feed-forward networks. That is, the network maintains no state at all. This might not be the behavior we want. Sequence models are central to natural language processing: they are models where there is some sort of dependence through time between the inputs. The classic example of a sequence model is the Hidden Markov Model for part-of-speech tagging. Another example is the conditional random field.

A recurrent neural network is a network that maintains some kind of state. For example, its output could be used as part of the next input, so that information can propagate along as the network passes over the sequence. In the case of an LSTM, for each element in the sequence there is a corresponding hidden state \(h_t\), which in principle can contain information from arbitrarily earlier points in the sequence. We can use the hidden state to predict words in a language model, part-of-speech tags, and a myriad of other things.

LSTMs in Pytorch

Before getting to the example, note a few things. Pytorch's LSTM expects all of its inputs to be 3D tensors. The semantics of the axes of these tensors is important. The first axis is the sequence itself, the second indexes instances in the mini-batch, and the third indexes elements of the input. We haven't discussed mini-batching, so let's just ignore that and assume we will always have just one dimension on the second axis. If we want to run the sequence model over the sentence "The cow jumped", our input should look like

\[\begin{bmatrix} \overbrace{q_\text{The}}^\text{row vector} \\ q_\text{cow} \\ q_\text{jumped} \end{bmatrix}\]

Except remember there is an additional 2nd dimension with size 1.

In addition, you could go through the sequence one element at a time, in which case the 1st axis will have size 1 too.
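
For instance, here is a minimal shape sketch for the sentence above, assuming an illustrative feature dimension of 4 (the tensor name is just a placeholder):

import torch

# Three words ("The cow jumped"), one instance in the mini-batch, 4 features per word:
# shape is (sequence length, mini-batch size, feature dimension)
the_cow_jumped = torch.randn(3, 1, 4)
print(the_cow_jumped.shape)  # torch.Size([3, 1, 4])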

Let's see a quick example.

# Author: Robert Guthrie

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)
lstm = nn.LSTM(3, 3)  # Input dim is 3, output dim is 3
inputs = [torch.randn(1, 3) for _ in range(5)]  # make a sequence of length 5

# initialize the hidden state.
hidden = (torch.randn(1, 1, 3),
          torch.randn(1, 1, 3))
for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)

# alternatively, we can do the entire sequence all at once.
# the first value returned by LSTM is all of the hidden states throughout
# the sequence. the second is just the most recent hidden state
# (compare the last slice of "out" with "hidden" below, they are the same)
# The reason for this is that:
# "out" will give you access to all hidden states in the sequence
# "hidden" will allow you to continue the sequence and backpropagate,
# by passing it as an argument  to the lstm at a later time
# Add the extra 2nd dimension
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3))  # clean out hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)
tensor([[[-0.0187,  0.1713, -0.2944]],

        [[-0.3521,  0.1026, -0.2971]],

        [[-0.3191,  0.0781, -0.1957]],

        [[-0.1634,  0.0941, -0.1637]],

        [[-0.3368,  0.0959, -0.0538]]], grad_fn=<MkldnnRnnLayerBackward0>)
(tensor([[[-0.3368,  0.0959, -0.0538]]], grad_fn=<StackBackward0>), tensor([[[-0.9825,  0.4715, -0.0633]]], grad_fn=<StackBackward0>))
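
As the comments above point out, the last slice of "out" is the same as the hidden state returned in "hidden". A quick check, continuing from the code above:

out_last = out[-1]                    # hidden state at the final timestep, shape (1, 3)
h_n = hidden[0][0]                    # final hidden state from the (h_n, c_n) tuple, shape (1, 3)
print(torch.allclose(out_last, h_n))  # True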

Example: An LSTM for Part-of-Speech Tagging

In this section, we will use an LSTM to get part-of-speech tags. We will not use Viterbi or Forward-Backward or anything like that, but as a (challenging) exercise to the reader, think about how Viterbi could be used after you have seen what is going on. In this example, we also refer to embeddings. If you are unfamiliar with embeddings, you can read up about them here.

The model is as follows: let our input sentence be \(w_1, \dots, w_M\), where \(w_i \in V\), our vocabulary. Also, let \(T\) be our tag set, and \(y_i\) the tag of word \(w_i\). Denote our prediction of the tag of word \(w_i\) by \(\hat{y}_i\).

This is a structured prediction model, where our output is a sequence \(\hat{y}_1, \dots, \hat{y}_M\), with \(\hat{y}_i \in T\).

To do the prediction, pass an LSTM over the sentence. Denote the hidden state at timestep \(i\) as \(h_i\). Also, assign each tag a unique index (like how we had word_to_ix in the word embeddings section). Then our prediction rule for \(\hat{y}_i\) is

\[\hat{y}_i = \text{argmax}_j \ (\log \text{Softmax}(Ah_i + b))_j \]

That is, take the log softmax of the affine map of the hidden state, and the predicted tag is the tag with the maximum value in this vector. Note that this immediately implies that the dimensionality of the target space of \(A\) is \(|T|\).
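
As a minimal sketch of this prediction rule (the sizes and names below are placeholders; the model class further down implements the same mapping inside its forward pass):

hidden_dim, num_tags = 6, 3               # illustrative sizes for h_i and |T|
A = nn.Linear(hidden_dim, num_tags)       # the affine map A h_i + b
h_i = torch.randn(1, hidden_dim)          # a stand-in hidden state at timestep i
log_probs = F.log_softmax(A(h_i), dim=1)  # log Softmax(A h_i + b)
y_hat_i = torch.argmax(log_probs, dim=1)  # index of the predicted tag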

Prepare the data:

def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    return torch.tensor(idxs, dtype=torch.long)


training_data = [
    # Tags are: DET - determiner; NN - noun; V - verb
    # For example, the word "The" is a determiner
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
# For each words-list (sentence) and tags-list in each tuple of training_data
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:  # word has not been assigned an index yet
            word_to_ix[word] = len(word_to_ix)  # Assign each word with a unique index
print(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}  # Assign each tag with a unique index

# These will usually be more like 32 or 64 dimensional.
# We will keep them small, so we can see how the weights change as we train.
EMBEDDING_DIM = 6
HIDDEN_DIM = 6
{'The': 0, 'dog': 1, 'ate': 2, 'the': 3, 'apple': 4, 'Everybody': 5, 'read': 6, 'that': 7, 'book': 8}

Create the model:

class LSTMTagger(nn.Module):

    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim

        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)

        # The LSTM takes word embeddings as inputs, and outputs hidden states
        # with dimensionality hidden_dim.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)

        # The linear layer that maps from hidden state space to tag space
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores

Train the model:

model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the scores are before training
# Note that element i,j of the output is the score for tag j for word i.
# Here we don't need to train, so the code is wrapped in torch.no_grad()
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)
    print(tag_scores)

for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()

        # Step 2. Get our inputs ready for the network, that is, turn them into
        # Tensors of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)

        # Step 3. Run our forward pass.
        tag_scores = model(sentence_in)

        # Step 4. Compute the loss, gradients, and update the parameters by
        #  calling optimizer.step()
        loss = loss_function(tag_scores, targets)
        loss.backward()
        optimizer.step()

# See what the scores are after training
with torch.no_grad():
    inputs = prepare_sequence(training_data[0][0], word_to_ix)
    tag_scores = model(inputs)

    # The sentence is "the dog ate the apple".  i,j corresponds to score for tag j
    # for word i. The predicted tag is the maximum scoring tag.
    # Here, we can see the predicted sequence below is 0 1 2 0 1
    # since 0 is index of the maximum value of row 1,
    # 1 is the index of maximum value of row 2, etc.
    # Which is DET NOUN VERB DET NOUN, the correct sequence!
    print(tag_scores)
tensor([[-1.1389, -1.2024, -0.9693],
        [-1.1065, -1.2200, -0.9834],
        [-1.1286, -1.2093, -0.9726],
        [-1.1190, -1.1960, -0.9916],
        [-1.0137, -1.2642, -1.0366]])
tensor([[-0.0462, -4.0106, -3.6096],
        [-4.8205, -0.0286, -3.9045],
        [-3.7876, -4.1355, -0.0394],
        [-0.0185, -4.7874, -4.6013],
        [-5.7881, -0.0186, -4.1778]])
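
To make the decoding explicit, the predicted tag sequence can be read off with an argmax over each row (ix_to_tag is a small helper introduced here for illustration):

ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}
predicted_tags = [ix_to_tag[i.item()] for i in tag_scores.argmax(dim=1)]
print(predicted_tags)  # ['DET', 'NN', 'V', 'DET', 'NN']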

Exercise: Augmenting the LSTM part-of-speech tagger with character-level features

In the example above, each word had an embedding, which served as the input to our sequence model. Let's augment the word embeddings with a representation derived from the characters of the word. We expect that this should help significantly, since character-level information like affixes has a large bearing on part-of-speech. For example, words with the affix -ly are almost always tagged as adverbs in English.

To do this, let \(c_w\) be the character-level representation of word \(w\). Let \(x_w\) be the word embedding as before. Then the input to our sequence model is the concatenation of \(x_w\) and \(c_w\). So if \(x_w\) has dimension 5 and \(c_w\) has dimension 3, our LSTM should accept an input of dimension 8.

To get the character-level representation, run an LSTM over the characters of a word, and let \(c_w\) be the final hidden state of this LSTM. Hints (a small shape sketch follows them below):

  • Your new model will have two LSTMs: the original one that outputs POS tag scores, and a new one that outputs a character-level representation of each word.

  • To do a sequence model over characters, you will need to embed characters. The character embeddings will be the input to the character LSTM.
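
A minimal shape sketch of the combination described above, assuming illustrative sizes CHAR_EMBEDDING_DIM = 3 and CHAR_HIDDEN_DIM = 3 and a single three-character word (the full tagger is left as the exercise):

CHAR_EMBEDDING_DIM, CHAR_HIDDEN_DIM = 3, 3
char_lstm = nn.LSTM(CHAR_EMBEDDING_DIM, CHAR_HIDDEN_DIM)  # the second LSTM from the hint above
char_embeds = torch.randn(3, 1, CHAR_EMBEDDING_DIM)       # stand-in character embeddings for one word
_, (char_hidden, _) = char_lstm(char_embeds)
c_w = char_hidden.view(1, -1)                             # character-level representation of the word
x_w = torch.randn(1, EMBEDDING_DIM)                       # stand-in word embedding
lstm_input = torch.cat([x_w, c_w], dim=1)                 # concatenation fed to the tagger LSTM
print(lstm_input.shape)                                   # torch.Size([1, 9]), i.e. EMBEDDING_DIM + CHAR_HIDDEN_DIM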

Total running time of the script: (0 minutes 0.851 seconds)

Gallery generated by Sphinx-Gallery
