{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://codelin.vip/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NLP From Scratch: Translation with a Sequence to Sequence Network and Attention\n===============================================================================\n\n**Author**: [Sean Robertson](https://github.com/spro)\n\nThis tutorials is part of a three-part series:\n\n- [NLP From Scratch: Classifying Names with a Character-Level\n RNN](https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html)\n- [NLP From Scratch: Generating Names with a Character-Level\n RNN](https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html)\n- [NLP From Scratch: Translation with a Sequence to Sequence Network\n and\n Attention](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html)\n\nThis is the third and final tutorial on doing **NLP From Scratch**,\nwhere we write our own classes and functions to preprocess the data to\ndo our NLP modeling tasks.\n\nIn this project we will be teaching a neural network to translate from\nFrench to English.\n\n``` {.sh}\n[KEY: > input, = target, < output]\n\n> il est en train de peindre un tableau .\n= he is painting a picture .\n< he is painting a picture .\n\n> pourquoi ne pas essayer ce vin delicieux ?\n= why not try that delicious wine ?\n< why not try that delicious wine ?\n\n> elle n est pas poete mais romanciere .\n= she is not a poet but a novelist .\n< she not not a poet but a novelist .\n\n> vous etes trop maigre .\n= you re too skinny .\n< you re all alone .\n```\n\n\\... to varying degrees of success.\n\nThis is made possible by the simple but powerful idea of the [sequence\nto sequence network](https://arxiv.org/abs/1409.3215), in which two\nrecurrent neural networks work together to transform one sequence to\nanother. 
An encoder network condenses an input sequence into a vector,\nand a decoder network unfolds that vector into a new sequence.\n\n![](https://pytorch.org/tutorials/_static/img/seq-seq-images/seq2seq.png)\n\nTo improve upon this model we\\'ll use an [attention\nmechanism](https://arxiv.org/abs/1409.0473), which lets the decoder\nlearn to focus over a specific range of the input sequence.\n\n**Recommended Reading:**\n\nI assume you have at least installed PyTorch, know Python, and\nunderstand Tensors:\n\n- For installation instructions\n- `/beginner/deep_learning_60min_blitz`{.interpreted-text role=\"doc\"}\n to get started with PyTorch in general\n- `/beginner/pytorch_with_examples`{.interpreted-text role=\"doc\"} for\n a wide and deep overview\n- `/beginner/former_torchies_tutorial`{.interpreted-text role=\"doc\"}\n if you are former Lua Torch user\n\nIt would also be useful to know about Sequence to Sequence networks and\nhow they work:\n\n- [Learning Phrase Representations using RNN Encoder-Decoder for\n Statistical Machine Translation](https://arxiv.org/abs/1406.1078)\n- [Sequence to Sequence Learning with Neural\n Networks](https://arxiv.org/abs/1409.3215)\n- [Neural Machine Translation by Jointly Learning to Align and\n Translate](https://arxiv.org/abs/1409.0473)\n- [A Neural Conversational Model](https://arxiv.org/abs/1506.05869)\n\nYou will also find the previous tutorials on\n`/intermediate/char_rnn_classification_tutorial`{.interpreted-text\nrole=\"doc\"} and\n`/intermediate/char_rnn_generation_tutorial`{.interpreted-text\nrole=\"doc\"} helpful as those concepts are very similar to the Encoder\nand Decoder models, respectively.\n\n**Requirements**\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from __future__ import unicode_literals, print_function, division\nfrom io import open\nimport unicodedata\nimport re\nimport random\n\nimport torch\nimport torch.nn as nn\nfrom torch import optim\nimport torch.nn.functional as F\n\nimport numpy as np\nfrom torch.utils.data import TensorDataset, DataLoader, RandomSampler\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Loading data files\n==================\n\nThe data for this project is a set of many thousands of English to\nFrench translation pairs.\n\n[This question on Open Data Stack\nExchange](https://opendata.stackexchange.com/questions/3888/dataset-of-sentences-translated-into-many-languages)\npointed me to the open translation site which has\ndownloads available at - and better\nyet, someone did the extra work of splitting language pairs into\nindividual text files here: \n\nThe English to French pairs are too big to include in the repository, so\ndownload to `data/eng-fra.txt` before continuing. The file is a tab\nseparated list of translation pairs:\n\n``` {.sh}\nI am cold. J'ai froid.\n```\n\n```{=html}\n
NOTE:
\n```\n```{=html}\n
\n```\n```{=html}\n

Download the data from here and extract it to the current directory.

\n```\n```{=html}\n
\n```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Similar to the character encoding used in the character-level RNN\ntutorials, we will be representing each word in a language as a one-hot\nvector, or giant vector of zeros except for a single one (at the index\nof the word). Compared to the dozens of characters that might exist in a\nlanguage, there are many many more words, so the encoding vector is much\nlarger. We will however cheat a bit and trim the data to only use a few\nthousand words per language.\n\n![](https://pytorch.org/tutorials/_static/img/seq-seq-images/word-encoding.png)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We\\'ll need a unique index per word to use as the inputs and targets of\nthe networks later. To keep track of all this we will use a helper class\ncalled `Lang` which has word \u2192 index (`word2index`) and index \u2192 word\n(`index2word`) dictionaries, as well as a count of each word\n`word2count` which will be used to replace rare words later.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "SOS_token = 0\nEOS_token = 1\n\nclass Lang:\n def __init__(self, name):\n self.name = name\n self.word2index = {}\n self.word2count = {}\n self.index2word = {0: \"SOS\", 1: \"EOS\"}\n self.n_words = 2 # Count SOS and EOS\n\n def addSentence(self, sentence):\n for word in sentence.split(' '):\n self.addWord(word)\n\n def addWord(self, word):\n if word not in self.word2index:\n self.word2index[word] = self.n_words\n self.word2count[word] = 1\n self.index2word[self.n_words] = word\n self.n_words += 1\n else:\n self.word2count[word] += 1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The files are all in Unicode, to simplify we will turn Unicode\ncharacters to ASCII, make everything lowercase, and trim most\npunctuation.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Turn a Unicode string to plain ASCII, thanks to\n# https://stackoverflow.com/a/518232/2809427\ndef unicodeToAscii(s):\n return ''.join(\n c for c in unicodedata.normalize('NFD', s)\n if unicodedata.category(c) != 'Mn'\n )\n\n# Lowercase, trim, and remove non-letter characters\ndef normalizeString(s):\n s = unicodeToAscii(s.lower().strip())\n s = re.sub(r\"([.!?])\", r\" \\1\", s)\n s = re.sub(r\"[^a-zA-Z!?]+\", r\" \", s)\n return s.strip()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To read the data file we will split the file into lines, and then split\nlines into pairs. 
The files are all English \u2192 Other Language, so if we\nwant to translate from Other Language \u2192 English I added the `reverse`\nflag to reverse the pairs.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def readLangs(lang1, lang2, reverse=False):\n print(\"Reading lines...\")\n\n # Read the file and split into lines\n lines = open('data/%s-%s.txt' % (lang1, lang2), encoding='utf-8').\\\n read().strip().split('\\n')\n\n # Split every line into pairs and normalize\n pairs = [[normalizeString(s) for s in l.split('\\t')] for l in lines]\n\n # Reverse pairs, make Lang instances\n if reverse:\n pairs = [list(reversed(p)) for p in pairs]\n input_lang = Lang(lang2)\n output_lang = Lang(lang1)\n else:\n input_lang = Lang(lang1)\n output_lang = Lang(lang2)\n\n return input_lang, output_lang, pairs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since there are a *lot* of example sentences and we want to train\nsomething quickly, we\\'ll trim the data set to only relatively short and\nsimple sentences. Here the maximum length is 10 words (that includes\nending punctuation) and we\\'re filtering to sentences that translate to\nthe form \\\"I am\\\" or \\\"He is\\\" etc. (accounting for apostrophes replaced\nearlier).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "MAX_LENGTH = 10\n\neng_prefixes = (\n \"i am \", \"i m \",\n \"he is\", \"he s \",\n \"she is\", \"she s \",\n \"you are\", \"you re \",\n \"we are\", \"we re \",\n \"they are\", \"they re \"\n)\n\ndef filterPair(p):\n return len(p[0].split(' ')) < MAX_LENGTH and \\\n len(p[1].split(' ')) < MAX_LENGTH and \\\n p[1].startswith(eng_prefixes)\n\n\ndef filterPairs(pairs):\n return [pair for pair in pairs if filterPair(pair)]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The full process for preparing the data is:\n\n- Read text file and split into lines, split lines into pairs\n- Normalize text, filter by length and content\n- Make word lists from sentences in pairs\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def prepareData(lang1, lang2, reverse=False):\n input_lang, output_lang, pairs = readLangs(lang1, lang2, reverse)\n print(\"Read %s sentence pairs\" % len(pairs))\n pairs = filterPairs(pairs)\n print(\"Trimmed to %s sentence pairs\" % len(pairs))\n print(\"Counting words...\")\n for pair in pairs:\n input_lang.addSentence(pair[0])\n output_lang.addSentence(pair[1])\n print(\"Counted words:\")\n print(input_lang.name, input_lang.n_words)\n print(output_lang.name, output_lang.n_words)\n return input_lang, output_lang, pairs\n\ninput_lang, output_lang, pairs = prepareData('eng', 'fra', True)\nprint(random.choice(pairs))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Seq2Seq Model\n=================\n\nA Recurrent Neural Network, or RNN, is a network that operates on a\nsequence and uses its own output as input for subsequent steps.\n\nA [Sequence to Sequence network](https://arxiv.org/abs/1409.3215), or\nseq2seq network, or [Encoder Decoder\nnetwork](https://arxiv.org/pdf/1406.1078v3.pdf), is a model consisting\nof two RNNs called the encoder and decoder. 
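In code, the two halves are invoked back to back; as a preview, the call pattern used later in the training and evaluation loops of this tutorial looks roughly like this:\n\n``` {.python}\n# Sketch of the seq2seq call pattern (the encoder and decoder are defined later in this tutorial)\nencoder_outputs, encoder_hidden = encoder(input_tensor)\ndecoder_outputs, _, _ = decoder(encoder_outputs, encoder_hidden)\n```\n\n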
The encoder reads an input\nsequence and outputs a single vector, and the decoder reads that vector\nto produce an output sequence.\n\n![](https://pytorch.org/tutorials/_static/img/seq-seq-images/seq2seq.png)\n\nUnlike sequence prediction with a single RNN, where every input\ncorresponds to an output, the seq2seq model frees us from sequence\nlength and order, which makes it ideal for translation between two\nlanguages.\n\nConsider the sentence `Je ne suis pas le chat noir` \u2192\n`I am not the black cat`. Most of the words in the input sentence have a\ndirect translation in the output sentence, but are in slightly different\norders, e.g. `chat noir` and `black cat`. Because of the `ne/pas`\nconstruction there is also one more word in the input sentence. It would\nbe difficult to produce a correct translation directly from the sequence\nof input words.\n\nWith a seq2seq model the encoder creates a single vector which, in the\nideal case, encodes the \\\"meaning\\\" of the input sequence into a single\nvector --- a single point in some N dimensional space of sentences.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Encoder\n===========\n\nThe encoder of a seq2seq network is a RNN that outputs some value for\nevery word from the input sentence. For every input word the encoder\noutputs a vector and a hidden state, and uses the hidden state for the\nnext input word.\n\n![](https://pytorch.org/tutorials/_static/img/seq-seq-images/encoder-network.png)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class EncoderRNN(nn.Module):\n def __init__(self, input_size, hidden_size, dropout_p=0.1):\n super(EncoderRNN, self).__init__()\n self.hidden_size = hidden_size\n\n self.embedding = nn.Embedding(input_size, hidden_size)\n self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)\n self.dropout = nn.Dropout(dropout_p)\n\n def forward(self, input):\n embedded = self.dropout(self.embedding(input))\n output, hidden = self.gru(embedded)\n return output, hidden" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The Decoder\n===========\n\nThe decoder is another RNN that takes the encoder output vector(s) and\noutputs a sequence of words to create the translation.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Simple Decoder\n==============\n\nIn the simplest seq2seq decoder we use only last output of the encoder.\nThis last output is sometimes called the *context vector* as it encodes\ncontext from the entire sequence. This context vector is used as the\ninitial hidden state of the decoder.\n\nAt every step of decoding, the decoder is given an input token and\nhidden state. 
The initial input token is the start-of-string ``\ntoken, and the first hidden state is the context vector (the encoder\\'s\nlast hidden state).\n\n![](https://pytorch.org/tutorials/_static/img/seq-seq-images/decoder-network.png)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class DecoderRNN(nn.Module):\n def __init__(self, hidden_size, output_size):\n super(DecoderRNN, self).__init__()\n self.embedding = nn.Embedding(output_size, hidden_size)\n self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)\n self.out = nn.Linear(hidden_size, output_size)\n\n def forward(self, encoder_outputs, encoder_hidden, target_tensor=None):\n batch_size = encoder_outputs.size(0)\n decoder_input = torch.empty(batch_size, 1, dtype=torch.long, device=device).fill_(SOS_token)\n decoder_hidden = encoder_hidden\n decoder_outputs = []\n\n for i in range(MAX_LENGTH):\n decoder_output, decoder_hidden = self.forward_step(decoder_input, decoder_hidden)\n decoder_outputs.append(decoder_output)\n\n if target_tensor is not None:\n # Teacher forcing: Feed the target as the next input\n decoder_input = target_tensor[:, i].unsqueeze(1) # Teacher forcing\n else:\n # Without teacher forcing: use its own predictions as the next input\n _, topi = decoder_output.topk(1)\n decoder_input = topi.squeeze(-1).detach() # detach from history as input\n\n decoder_outputs = torch.cat(decoder_outputs, dim=1)\n decoder_outputs = F.log_softmax(decoder_outputs, dim=-1)\n return decoder_outputs, decoder_hidden, None # We return `None` for consistency in the training loop\n\n def forward_step(self, input, hidden):\n output = self.embedding(input)\n output = F.relu(output)\n output, hidden = self.gru(output, hidden)\n output = self.out(output)\n return output, hidden" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "I encourage you to train and observe the results of this model, but to\nsave space we\\'ll be going straight for the gold and introducing the\nAttention Mechanism.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Attention Decoder\n=================\n\nIf only the context vector is passed between the encoder and decoder,\nthat single vector carries the burden of encoding the entire sentence.\n\nAttention allows the decoder network to \\\"focus\\\" on a different part of\nthe encoder\\'s outputs for every step of the decoder\\'s own outputs.\nFirst we calculate a set of *attention weights*. These will be\nmultiplied by the encoder output vectors to create a weighted\ncombination. The result (called `attn_applied` in the code) should\ncontain information about that specific part of the input sequence, and\nthus help the decoder choose the right output words.\n\n![](https://i.imgur.com/1152PYf.png)\n\nCalculating the attention weights is done with another feed-forward\nlayer `attn`, using the decoder\\'s input and hidden state as inputs.\nBecause there are sentences of all sizes in the training data, to\nactually create and train this layer we have to choose a maximum\nsentence length (input length, for encoder outputs) that it can apply\nto. 
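As a toy illustration of the weighted combination described above (the weights and encoder outputs here are made-up numbers, not produced by the model):\n\n``` {.python}\nweights = torch.tensor([[[0.1, 0.7, 0.2]]])            # (1, 1, 3): attention over 3 encoder steps\nkeys = torch.tensor([[[1., 0.], [0., 1.], [1., 1.]]])  # (1, 3, 2): encoder outputs\ncontext = torch.bmm(weights, keys)                     # (1, 1, 2): approximately [[[0.3, 0.9]]]\n```\n\n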
Sentences of the maximum length will use all the attention weights,\nwhile shorter sentences will only use the first few.\n\n![](https://pytorch.org/tutorials/_static/img/seq-seq-images/attention-decoder-network.png)\n\nBahdanau attention, also known as additive attention, is a commonly used\nattention mechanism in sequence-to-sequence models, particularly in\nneural machine translation tasks. It was introduced by Bahdanau et al.\nin their paper titled [Neural Machine Translation by Jointly Learning to\nAlign and Translate](https://arxiv.org/pdf/1409.0473.pdf). This\nattention mechanism employs a learned alignment model to compute\nattention scores between the encoder and decoder hidden states. It\nutilizes a feed-forward neural network to calculate alignment scores.\n\nHowever, there are alternative attention mechanisms available, such as\nLuong attention, which computes attention scores by taking the dot\nproduct between the decoder hidden state and the encoder hidden states.\nIt does not involve the non-linear transformation used in Bahdanau\nattention.\n\nIn this tutorial, we will be using Bahdanau attention. However, it would\nbe a valuable exercise to explore modifying the attention mechanism to\nuse Luong attention.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class BahdanauAttention(nn.Module):\n def __init__(self, hidden_size):\n super(BahdanauAttention, self).__init__()\n self.Wa = nn.Linear(hidden_size, hidden_size)\n self.Ua = nn.Linear(hidden_size, hidden_size)\n self.Va = nn.Linear(hidden_size, 1)\n\n def forward(self, query, keys):\n scores = self.Va(torch.tanh(self.Wa(query) + self.Ua(keys)))\n scores = scores.squeeze(2).unsqueeze(1)\n\n weights = F.softmax(scores, dim=-1)\n context = torch.bmm(weights, keys)\n\n return context, weights\n\nclass AttnDecoderRNN(nn.Module):\n def __init__(self, hidden_size, output_size, dropout_p=0.1):\n super(AttnDecoderRNN, self).__init__()\n self.embedding = nn.Embedding(output_size, hidden_size)\n self.attention = BahdanauAttention(hidden_size)\n self.gru = nn.GRU(2 * hidden_size, hidden_size, batch_first=True)\n self.out = nn.Linear(hidden_size, output_size)\n self.dropout = nn.Dropout(dropout_p)\n\n def forward(self, encoder_outputs, encoder_hidden, target_tensor=None):\n batch_size = encoder_outputs.size(0)\n decoder_input = torch.empty(batch_size, 1, dtype=torch.long, device=device).fill_(SOS_token)\n decoder_hidden = encoder_hidden\n decoder_outputs = []\n attentions = []\n\n for i in range(MAX_LENGTH):\n decoder_output, decoder_hidden, attn_weights = self.forward_step(\n decoder_input, decoder_hidden, encoder_outputs\n )\n decoder_outputs.append(decoder_output)\n attentions.append(attn_weights)\n\n if target_tensor is not None:\n # Teacher forcing: Feed the target as the next input\n decoder_input = target_tensor[:, i].unsqueeze(1) # Teacher forcing\n else:\n # Without teacher forcing: use its own predictions as the next input\n _, topi = decoder_output.topk(1)\n decoder_input = topi.squeeze(-1).detach() # detach from history as input\n\n decoder_outputs = torch.cat(decoder_outputs, dim=1)\n decoder_outputs = F.log_softmax(decoder_outputs, dim=-1)\n attentions = torch.cat(attentions, dim=1)\n\n return decoder_outputs, decoder_hidden, attentions\n\n\n def forward_step(self, input, hidden, encoder_outputs):\n embedded = self.dropout(self.embedding(input))\n\n query = hidden.permute(1, 0, 2)\n context, attn_weights = self.attention(query, encoder_outputs)\n 
input_gru = torch.cat((embedded, context), dim=2)\n\n output, hidden = self.gru(input_gru, hidden)\n output = self.out(output)\n\n return output, hidden, attn_weights" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{=html}\n
NOTE:
\n```\n```{=html}\n
\n```\n```{=html}\n

There are other forms of attention that work around the length limitation by using a relative position approach. Read about \"local attention\" in Effective Approaches to Attention-based Neural Machine Translation.

\n```\n```{=html}\n
\n```\nTraining\n========\n\nPreparing Training Data\n-----------------------\n\nTo train, for each pair we will need an input tensor (indexes of the\nwords in the input sentence) and target tensor (indexes of the words in\nthe target sentence). While creating these vectors we will append the\nEOS token to both sequences.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def indexesFromSentence(lang, sentence):\n return [lang.word2index[word] for word in sentence.split(' ')]\n\ndef tensorFromSentence(lang, sentence):\n indexes = indexesFromSentence(lang, sentence)\n indexes.append(EOS_token)\n return torch.tensor(indexes, dtype=torch.long, device=device).view(1, -1)\n\ndef tensorsFromPair(pair):\n input_tensor = tensorFromSentence(input_lang, pair[0])\n target_tensor = tensorFromSentence(output_lang, pair[1])\n return (input_tensor, target_tensor)\n\ndef get_dataloader(batch_size):\n input_lang, output_lang, pairs = prepareData('eng', 'fra', True)\n\n n = len(pairs)\n input_ids = np.zeros((n, MAX_LENGTH), dtype=np.int32)\n target_ids = np.zeros((n, MAX_LENGTH), dtype=np.int32)\n\n for idx, (inp, tgt) in enumerate(pairs):\n inp_ids = indexesFromSentence(input_lang, inp)\n tgt_ids = indexesFromSentence(output_lang, tgt)\n inp_ids.append(EOS_token)\n tgt_ids.append(EOS_token)\n input_ids[idx, :len(inp_ids)] = inp_ids\n target_ids[idx, :len(tgt_ids)] = tgt_ids\n\n train_data = TensorDataset(torch.LongTensor(input_ids).to(device),\n torch.LongTensor(target_ids).to(device))\n\n train_sampler = RandomSampler(train_data)\n train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)\n return input_lang, output_lang, train_dataloader" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Training the Model\n==================\n\nTo train we run the input sentence through the encoder, and keep track\nof every output and the latest hidden state. Then the decoder is given\nthe `` token as its first input, and the last hidden state of the\nencoder as its first hidden state.\n\n\\\"Teacher forcing\\\" is the concept of using the real target outputs as\neach next input, instead of using the decoder\\'s guess as the next\ninput. Using teacher forcing causes it to converge faster but [when the\ntrained network is exploited, it may exhibit\ninstability](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.378.4095&rep=rep1&type=pdf).\n\nYou can observe outputs of teacher-forced networks that read with\ncoherent grammar but wander far from the correct translation\n-intuitively it has learned to represent the output grammar and can\n\\\"pick up\\\" the meaning once the teacher tells it the first few words,\nbut it has not properly learned how to create the sentence from the\ntranslation in the first place.\n\nBecause of the freedom PyTorch\\'s autograd gives us, we can randomly\nchoose to use teacher forcing or not with a simple if statement. 
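With the decoders defined above, teacher forcing is applied whenever a `target_tensor` is passed in, so one way to do this (a sketch; `teacher_forcing_ratio` and `use_tf` are illustrative names not used elsewhere in this notebook) is to withhold the target for a random fraction of batches:\n\n``` {.python}\nteacher_forcing_ratio = 0.5\nuse_tf = random.random() < teacher_forcing_ratio\ndecoder_outputs, _, _ = decoder(\n    encoder_outputs, encoder_hidden, target_tensor if use_tf else None)\n```\n\n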
Turn\n`teacher_forcing_ratio` up to use more of it.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def train_epoch(dataloader, encoder, decoder, encoder_optimizer,\n decoder_optimizer, criterion):\n\n total_loss = 0\n for data in dataloader:\n input_tensor, target_tensor = data\n\n encoder_optimizer.zero_grad()\n decoder_optimizer.zero_grad()\n\n encoder_outputs, encoder_hidden = encoder(input_tensor)\n decoder_outputs, _, _ = decoder(encoder_outputs, encoder_hidden, target_tensor)\n\n loss = criterion(\n decoder_outputs.view(-1, decoder_outputs.size(-1)),\n target_tensor.view(-1)\n )\n loss.backward()\n\n encoder_optimizer.step()\n decoder_optimizer.step()\n\n total_loss += loss.item()\n\n return total_loss / len(dataloader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a helper function to print time elapsed and estimated time\nremaining given the current time and progress %.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import time\nimport math\n\ndef asMinutes(s):\n m = math.floor(s / 60)\n s -= m * 60\n return '%dm %ds' % (m, s)\n\ndef timeSince(since, percent):\n now = time.time()\n s = now - since\n es = s / (percent)\n rs = es - s\n return '%s (- %s)' % (asMinutes(s), asMinutes(rs))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The whole training process looks like this:\n\n- Start a timer\n- Initialize optimizers and criterion\n- Create set of training pairs\n- Start empty losses array for plotting\n\nThen we call `train` many times and occasionally print the progress (%\nof examples, time so far, estimated time) and average loss.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def train(train_dataloader, encoder, decoder, n_epochs, learning_rate=0.001,\n print_every=100, plot_every=100):\n start = time.time()\n plot_losses = []\n print_loss_total = 0 # Reset every print_every\n plot_loss_total = 0 # Reset every plot_every\n\n encoder_optimizer = optim.Adam(encoder.parameters(), lr=learning_rate)\n decoder_optimizer = optim.Adam(decoder.parameters(), lr=learning_rate)\n criterion = nn.NLLLoss()\n\n for epoch in range(1, n_epochs + 1):\n loss = train_epoch(train_dataloader, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)\n print_loss_total += loss\n plot_loss_total += loss\n\n if epoch % print_every == 0:\n print_loss_avg = print_loss_total / print_every\n print_loss_total = 0\n print('%s (%d %d%%) %.4f' % (timeSince(start, epoch / n_epochs),\n epoch, epoch / n_epochs * 100, print_loss_avg))\n\n if epoch % plot_every == 0:\n plot_loss_avg = plot_loss_total / plot_every\n plot_losses.append(plot_loss_avg)\n plot_loss_total = 0\n\n showPlot(plot_losses)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Plotting results\n================\n\nPlotting is done with matplotlib, using the array of loss values\n`plot_losses` saved while training.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nplt.switch_backend('agg')\nimport matplotlib.ticker as ticker\nimport numpy as np\n\ndef showPlot(points):\n plt.figure()\n fig, ax = plt.subplots()\n # this locator puts ticks at regular intervals\n loc = ticker.MultipleLocator(base=0.2)\n ax.yaxis.set_major_locator(loc)\n plt.plot(points)" ] }, { "cell_type": 
"markdown", "metadata": {}, "source": [ "Evaluation\n==========\n\nEvaluation is mostly the same as training, but there are no targets so\nwe simply feed the decoder\\'s predictions back to itself for each step.\nEvery time it predicts a word we add it to the output string, and if it\npredicts the EOS token we stop there. We also store the decoder\\'s\nattention outputs for display later.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def evaluate(encoder, decoder, sentence, input_lang, output_lang):\n with torch.no_grad():\n input_tensor = tensorFromSentence(input_lang, sentence)\n\n encoder_outputs, encoder_hidden = encoder(input_tensor)\n decoder_outputs, decoder_hidden, decoder_attn = decoder(encoder_outputs, encoder_hidden)\n\n _, topi = decoder_outputs.topk(1)\n decoded_ids = topi.squeeze()\n\n decoded_words = []\n for idx in decoded_ids:\n if idx.item() == EOS_token:\n decoded_words.append('')\n break\n decoded_words.append(output_lang.index2word[idx.item()])\n return decoded_words, decoder_attn" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can evaluate random sentences from the training set and print out the\ninput, target, and output to make some subjective quality judgements:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def evaluateRandomly(encoder, decoder, n=10):\n for i in range(n):\n pair = random.choice(pairs)\n print('>', pair[0])\n print('=', pair[1])\n output_words, _ = evaluate(encoder, decoder, pair[0], input_lang, output_lang)\n output_sentence = ' '.join(output_words)\n print('<', output_sentence)\n print('')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Training and Evaluating\n=======================\n\nWith all these helper functions in place (it looks like extra work, but\nit makes it easier to run multiple experiments) we can actually\ninitialize a network and start training.\n\nRemember that the input sentences were heavily filtered. For this small\ndataset we can use relatively small networks of 256 hidden nodes and a\nsingle GRU layer. After about 40 minutes on a MacBook CPU we\\'ll get\nsome reasonable results.\n\n```{=html}\n
NOTE:
\n```\n```{=html}\n
\n```\n```{=html}\n

If you run this notebook you can train, interrupt the kernel, evaluate, and continue training later. Comment out the lines where the encoder and decoder are initialized and run train again.

\n```\n```{=html}\n
\n```\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "hidden_size = 128\nbatch_size = 32\n\ninput_lang, output_lang, train_dataloader = get_dataloader(batch_size)\n\nencoder = EncoderRNN(input_lang.n_words, hidden_size).to(device)\ndecoder = AttnDecoderRNN(hidden_size, output_lang.n_words).to(device)\n\ntrain(train_dataloader, encoder, decoder, 80, print_every=5, plot_every=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Set dropout layers to `eval` mode\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "encoder.eval()\ndecoder.eval()\nevaluateRandomly(encoder, decoder)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualizing Attention\n=====================\n\nA useful property of the attention mechanism is its highly interpretable\noutputs. Because it is used to weight specific encoder outputs of the\ninput sequence, we can imagine looking where the network is focused most\nat each time step.\n\nYou could simply run `plt.matshow(attentions)` to see attention output\ndisplayed as a matrix. For a better viewing experience we will do the\nextra work of adding axes and labels:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def showAttention(input_sentence, output_words, attentions):\n fig = plt.figure()\n ax = fig.add_subplot(111)\n cax = ax.matshow(attentions.cpu().numpy(), cmap='bone')\n fig.colorbar(cax)\n\n # Set up axes\n ax.set_xticklabels([''] + input_sentence.split(' ') +\n [''], rotation=90)\n ax.set_yticklabels([''] + output_words)\n\n # Show label at every tick\n ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n\n plt.show()\n\n\ndef evaluateAndShowAttention(input_sentence):\n output_words, attentions = evaluate(encoder, decoder, input_sentence, input_lang, output_lang)\n print('input =', input_sentence)\n print('output =', ' '.join(output_words))\n showAttention(input_sentence, output_words, attentions[0, :len(output_words), :])\n\n\nevaluateAndShowAttention('il n est pas aussi grand que son pere')\n\nevaluateAndShowAttention('je suis trop fatigue pour conduire')\n\nevaluateAndShowAttention('je suis desole si c est une question idiote')\n\nevaluateAndShowAttention('je suis reellement fiere de vous')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Exercises\n=========\n\n- Try with a different dataset\n - Another language pair\n - Human \u2192 Machine (e.g. IOT commands)\n - Chat \u2192 Response\n - Question \u2192 Answer\n- Replace the embeddings with pretrained word embeddings such as\n `word2vec` or `GloVe`\n- Try with more layers, more hidden units, and more sentences. Compare\n the training time and results.\n- If you use a translation file where pairs have two of the same\n phrase (`I am test \\t I am test`), you can use this as an\n autoencoder. Try this:\n - Train as an autoencoder\n - Save only the Encoder network\n - Train a new Decoder for translation from there\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 0 }