{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://pytorch.org/tutorials/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Introduction** \\|\\| [Tensors](tensors_deeper_tutorial.html) \\|\\|\n[Autograd](autogradyt_tutorial.html) \\|\\| [Building\nModels](modelsyt_tutorial.html) \\|\\| [TensorBoard\nSupport](tensorboardyt_tutorial.html) \\|\\| [Training\nModels](trainingyt.html) \\|\\| [Model Understanding](captumyt.html)\n\nIntroduction to PyTorch\n=======================\n\nFollow along with the video below or on\n[youtube](https://www.youtube.com/watch?v=IC0_FRiX-sw).\n\n``` {.python .jupyter-code-cell}\nfrom IPython.display import display, HTML\nhtml_code = \"\"\"\n
<div style=\"margin-top:10px; margin-bottom:10px;\">\n  <iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/IC0_FRiX-sw\" frameborder=\"0\" allow=\"accelerometer; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen></iframe>\n</div>
\n\"\"\"\ndisplay(HTML(html_code))\n```\n\nPyTorch Tensors\n---------------\n\nFollow along with the video beginning at\n[03:50](https://www.youtube.com/watch?v=IC0_FRiX-sw&t=230s).\n\nFirst, we'll import pytorch.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import torch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's see a few basic tensor manipulations. First, just a few of the\nways to create tensors:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "z = torch.zeros(5, 3)\nprint(z)\nprint(z.dtype)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Above, we create a 5x3 matrix filled with zeros, and query its datatype\nto find out that the zeros are 32-bit floating point numbers, which is\nthe default PyTorch.\n\nWhat if you wanted integers instead? You can always override the\ndefault:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "i = torch.ones((5, 3), dtype=torch.int16)\nprint(i)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can see that when we do change the default, the tensor helpfully\nreports this when printed.\n\nIt's common to initialize learning weights randomly, often with a\nspecific seed for the PRNG for reproducibility of results:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "torch.manual_seed(1729)\nr1 = torch.rand(2, 2)\nprint('A random tensor:')\nprint(r1)\n\nr2 = torch.rand(2, 2)\nprint('\\nA different random tensor:')\nprint(r2) # new values\n\ntorch.manual_seed(1729)\nr3 = torch.rand(2, 2)\nprint('\\nShould match r1:')\nprint(r3) # repeats values of r1 because of re-seed" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "PyTorch tensors perform arithmetic operations intuitively. Tensors of\nsimilar shapes may be added, multiplied, etc. 
Operations with scalars\nare distributed over the tensor:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "ones = torch.ones(2, 3)\nprint(ones)\n\ntwos = torch.ones(2, 3) * 2 # every element is multiplied by 2\nprint(twos)\n\nthrees = ones + twos # addition allowed because shapes are similar\nprint(threes) # tensors are added element-wise\nprint(threes.shape) # this has the same dimensions as input tensors\n\nr1 = torch.rand(2, 3)\nr2 = torch.rand(3, 2)\n# uncomment this line to get a runtime error\n# r3 = r1 + r2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's a small sample of the mathematical operations available:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "r = (torch.rand(2, 2) - 0.5) * 2 # values between -1 and 1\nprint('A random matrix, r:')\nprint(r)\n\n# Common mathematical operations are supported:\nprint('\\nAbsolute value of r:')\nprint(torch.abs(r))\n\n# ...as are trigonometric functions:\nprint('\\nInverse sine of r:')\nprint(torch.asin(r))\n\n# ...and linear algebra operations like determinant and singular value decomposition\nprint('\\nDeterminant of r:')\nprint(torch.det(r))\nprint('\\nSingular value decomposition of r:')\nprint(torch.svd(r))\n\n# ...and statistical and aggregate operations:\nprint('\\nAverage and standard deviation of r:')\nprint(torch.std_mean(r))\nprint('\\nMaximum value of r:')\nprint(torch.max(r))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There's a good deal more to know about the power of PyTorch tensors,\nincluding how to set them up for parallel computations on GPU - we'll be\ngoing into more depth in another video.\n\nPyTorch Models\n==============\n\nFollow along with the video beginning at\n[10:00](https://www.youtube.com/watch?v=IC0_FRiX-sw&t=600s).\n\nLet's talk about how we can express models in PyTorch.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import torch                     # for all things PyTorch\nimport torch.nn as nn            # for torch.nn.Module, the parent object for PyTorch models\nimport torch.nn.functional as F  # for the activation function" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![](https://pytorch.org/tutorials/_static/img/mnist.png)\n\n*Figure: LeNet-5*\n\nAbove is a diagram of LeNet-5, one of the earliest convolutional neural\nnets, and one of the drivers of the explosion in Deep Learning. It was\nbuilt to read small images of handwritten numbers (the MNIST dataset),\nand correctly classify which digit was represented in the image.\n\nHere's the abridged version of how it works:\n\n- Layer C1 is a convolutional layer, meaning that it scans the input\n    image for features it learned during training. It outputs a map of\n    where it saw each of its learned features in the image. This\n    \"activation map\" is downsampled in layer S2.\n- Layer C3 is another convolutional layer, this time scanning C1's\n    activation map for *combinations* of features. 
It also puts out an\n activation map describing the spatial locations of these feature\n combinations, which is downsampled in layer S4.\n- Finally, the fully-connected layers at the end, F5, F6, and OUTPUT,\n are a *classifier* that takes the final activation map, and\n classifies it into one of ten bins representing the 10 digits.\n\nHow do we express this simple neural network in code?\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class LeNet(nn.Module):\n\n def __init__(self):\n super(LeNet, self).__init__()\n # 1 input image channel (black & white), 6 output channels, 5x5 square convolution\n # kernel\n self.conv1 = nn.Conv2d(1, 6, 5)\n self.conv2 = nn.Conv2d(6, 16, 5)\n # an affine operation: y = Wx + b\n self.fc1 = nn.Linear(16 * 5 * 5, 120) # 5*5 from image dimension\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n # Max pooling over a (2, 2) window\n x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))\n # If the size is a square you can only specify a single number\n x = F.max_pool2d(F.relu(self.conv2(x)), 2)\n x = x.view(-1, self.num_flat_features(x))\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x\n\n def num_flat_features(self, x):\n size = x.size()[1:] # all dimensions except the batch dimension\n num_features = 1\n for s in size:\n num_features *= s\n return num_features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Looking over this code, you should be able to spot some structural\nsimilarities with the diagram above.\n\nThis demonstrates the structure of a typical PyTorch model:\n\n- It inherits from `torch.nn.Module` - modules may be nested - in\n fact, even the `Conv2d` and `Linear` layer classes inherit from\n `torch.nn.Module`.\n- A model will have an `__init__()` function, where it instantiates\n its layers, and loads any data artifacts it might need (e.g., an NLP\n model might load a vocabulary).\n- A model will have a `forward()` function. This is where the actual\n computation happens: An input is passed through the network layers\n and various functions to generate an output.\n- Other than that, you can build out your model class like any other\n Python class, adding whatever properties and methods you need to\n support your model's computation.\n\nLet's instantiate this object and run a sample input through it.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "net = LeNet()\nprint(net) # what does the object tell us about itself?\n\ninput = torch.rand(1, 1, 32, 32) # stand-in for a 32x32 black & white image\nprint('\\nImage batch shape:')\nprint(input.shape)\n\noutput = net(input) # we don't call forward() directly\nprint('\\nRaw output:')\nprint(output)\nprint(output.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are a few important things happening above:\n\nFirst, we instantiate the `LeNet` class, and we print the `net` object.\nA subclass of `torch.nn.Module` will report the layers it has created\nand their shapes and parameters. This can provide a handy overview of a\nmodel if you want to get the gist of its processing.\n\nBelow that, we create a dummy input representing a 32x32 image with 1\ncolor channel. 
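Here, `torch.rand` simply stands in for real data. If you wanted to\nfeed the model a real picture instead, a minimal sketch might look\nlike this (`my_digit.png` is a hypothetical file name):\n\n    from PIL import Image\n    import torchvision.transforms as T\n\n    img = Image.open('my_digit.png').convert('L')  # 'L' = one grayscale channel\n    to_input = T.Compose([T.Resize((32, 32)), T.ToTensor()])\n    input = to_input(img).unsqueeze(0)             # shape becomes (1, 1, 32, 32)\n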
As the sketch suggests, you would normally load an image tile and\nconvert it to a tensor of this shape.\n\nYou may have noticed an extra dimension to our tensor - the *batch\ndimension.* PyTorch models assume they are working on *batches* of data\n- for example, a batch of 16 of our image tiles would have the shape\n`(16, 1, 32, 32)`. Since we're only using one image, we create a batch\nof 1 with shape `(1, 1, 32, 32)`.\n\nWe ask the model for an inference by calling it like a function:\n`net(input)`. The output of this call represents the model's confidence\nthat the input represents a particular digit. (Since this instance of\nthe model hasn't learned anything yet, we shouldn't expect to see any\nsignal in the output.) Looking at the shape of `output`, we can see that\nit also has a batch dimension, the size of which should always match the\ninput batch dimension. If we had passed in an input batch of 16\ninstances, `output` would have a shape of `(16, 10)`.\n\nDatasets and Dataloaders\n========================\n\nFollow along with the video beginning at\n[14:00](https://www.youtube.com/watch?v=IC0_FRiX-sw&t=840s).\n\nBelow, we're going to demonstrate how to use one of the\nready-to-download, open-access datasets from TorchVision, how to\ntransform the images for consumption by your model, and how to use the\nDataLoader to feed batches of data to your model.\n\nThe first thing we need to do is transform our incoming images into a\nPyTorch tensor.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#%matplotlib inline\n\nimport torch\nimport torchvision\nimport torchvision.transforms as transforms\n\ntransform = transforms.Compose(\n    [transforms.ToTensor(),\n     transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616))])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we specify two transformations for our input:\n\n- `transforms.ToTensor()` converts images loaded by Pillow into\n    PyTorch tensors.\n\n- `transforms.Normalize()` adjusts the values of the tensor so that\n    their average is zero and their standard deviation is 1.0. Most\n    activation functions have their strongest gradients around x = 0, so\n    centering our data there can speed learning. The values passed to\n    the transform are the means (first tuple) and the standard\n    deviations (second tuple) of the RGB values of the images in the\n    dataset. You can calculate these values yourself by running these\n    few lines of code:\n\n        from torch.utils.data import ConcatDataset\n        transform = transforms.Compose([transforms.ToTensor()])\n        trainset = torchvision.datasets.CIFAR10(root='./data', train=True,\n                                                download=True, transform=transform)\n\n        # stack all train images together into a tensor of shape\n        # (50000, 3, 32, 32)\n        x = torch.stack([sample[0] for sample in ConcatDataset([trainset])])\n\n        # get the mean of each channel\n        mean = torch.mean(x, dim=(0,2,3))  # tensor([0.4914, 0.4822, 0.4465])\n        std = torch.std(x, dim=(0,2,3))    # tensor([0.2470, 0.2435, 0.2616])\n\nThere are many more transforms available, including cropping, centering,\nrotation, and reflection.\n
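To see what our two-transform pipeline does to a single image, here's\na small sketch (the blank `Image.new()` picture is just a stand-in for\none CIFAR10 image):\n\n    from PIL import Image\n\n    img = Image.new('RGB', (32, 32))  # stand-in for one 32x32 color image\n    x = transform(img)                # ToTensor, then Normalize\n    print(x.shape, x.dtype)           # torch.Size([3, 32, 32]) torch.float32\n\nNext, we'll create an instance of the CIFAR10 dataset. 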
This is a set of\n32x32 color image tiles representing 10 classes of objects: 6 of animals\n(bird, cat, deer, dog, frog, horse) and 4 of vehicles (airplane,\nautomobile, ship, truck):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "trainset = torchvision.datasets.CIFAR10(root='./data', train=True,\n download=True, transform=transform)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{=html}\n
<div class=\"alert alert-info\"><h4>Note</h4><p>When you run the cell above, it may take a little time for the dataset to download.</p></div>
\n```\nThis is an example of creating a dataset object in PyTorch. Downloadable\ndatasets (like CIFAR-10 above) are subclasses of\n`torch.utils.data.Dataset`. `Dataset` classes in PyTorch include the\ndownloadable datasets in TorchVision, Torchtext, and TorchAudio, as well\nas utility dataset classes such as `torchvision.datasets.ImageFolder`,\nwhich will read a folder of labeled images. You can also create your own\nsubclasses of `Dataset`.\n\nWhen we instantiate our dataset, we need to tell it a few things:\n\n- The filesystem path to where we want the data to go.\n- Whether or not we are using this set for training; most datasets\n will be split into training and test subsets.\n- Whether we would like to download the dataset if we haven't already.\n- The transformations we want to apply to the data.\n\nOnce your dataset is ready, you can give it to the `DataLoader`:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n shuffle=True, num_workers=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A `Dataset` subclass wraps access to the data, and is specialized to the\ntype of data it's serving. The `DataLoader` knows *nothing* about the\ndata, but organizes the input tensors served by the `Dataset` into\nbatches with the parameters you specify.\n\nIn the example above, we've asked a `DataLoader` to give us batches of 4\nimages from `trainset`, randomizing their order (`shuffle=True`), and we\ntold it to spin up two workers to load data from disk.\n\nIt's good practice to visualize the batches your `DataLoader` serves:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nclasses = ('plane', 'car', 'bird', 'cat',\n 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')\n\ndef imshow(img):\n img = img / 2 + 0.5 # unnormalize\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n\n\n# get some random training images\ndataiter = iter(trainloader)\nimages, labels = next(dataiter)\n\n# show images\nimshow(torchvision.utils.make_grid(images))\n# print labels\nprint(' '.join('%5s' % classes[labels[j]] for j in range(4)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Running the above cell should show you a strip of four images, and the\ncorrect label for each.\n\nTraining Your PyTorch Model\n===========================\n\nFollow along with the video beginning at\n[17:10](https://www.youtube.com/watch?v=IC0_FRiX-sw&t=1030s).\n\nLet's put all the pieces together, and train a model:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "#%matplotlib inline\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nimport torchvision\nimport torchvision.transforms as transforms\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, we'll need training and test datasets. If you haven't already,\nrun the cell below to make sure the dataset is downloaded. 
(It may take\na minute.)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "transform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\ntrainset = torchvision.datasets.CIFAR10(root='./data', train=True,\n download=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=4,\n shuffle=True, num_workers=2)\n\ntestset = torchvision.datasets.CIFAR10(root='./data', train=False,\n download=True, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=4,\n shuffle=False, num_workers=2)\n\nclasses = ('plane', 'car', 'bird', 'cat',\n 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll run our check on the output from `DataLoader`:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\nimport numpy as np\n\n# functions to show an image\n\n\ndef imshow(img):\n img = img / 2 + 0.5 # unnormalize\n npimg = img.numpy()\n plt.imshow(np.transpose(npimg, (1, 2, 0)))\n\n\n# get some random training images\ndataiter = iter(trainloader)\nimages, labels = next(dataiter)\n\n# show images\nimshow(torchvision.utils.make_grid(images))\n# print labels\nprint(' '.join('%5s' % classes[labels[j]] for j in range(4)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the model we'll train. If it looks familiar, that's because it's\na variant of LeNet - discussed earlier in this video - adapted for\n3-color images.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(3, 6, 5)\n self.pool = nn.MaxPool2d(2, 2)\n self.conv2 = nn.Conv2d(6, 16, 5)\n self.fc1 = nn.Linear(16 * 5 * 5, 120)\n self.fc2 = nn.Linear(120, 84)\n self.fc3 = nn.Linear(84, 10)\n\n def forward(self, x):\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n x = x.view(-1, 16 * 5 * 5)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x\n\n\nnet = Net()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last ingredients we need are a loss function and an optimizer:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "criterion = nn.CrossEntropyLoss()\noptimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The loss function, as discussed earlier in this video, is a measure of\nhow far from our ideal output the model's prediction was. Cross-entropy\nloss is a typical loss function for classification models like ours.\n\nThe **optimizer** is what drives the learning. Here we have created an\noptimizer that implements *stochastic gradient descent,* one of the more\nstraightforward optimization algorithms. Besides parameters of the\nalgorithm, like the learning rate (`lr`) and momentum, we also pass in\n`net.parameters()`, which is a collection of all the learning weights in\nthe model - which is what the optimizer adjusts.\n\nFinally, all of this is assembled into the training loop. 
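Every pass through the loop follows the same five steps: zero the\ngradients, run the forward pass, compute the loss, backpropagate, and\nupdate the weights. Here is a minimal sketch of a single optimization\nstep on a toy linear model with random data (the `toy_` names are\npurely illustrative):\n\n    toy_model = nn.Linear(4, 2)                  # tiny stand-in model\n    toy_opt = optim.SGD(toy_model.parameters(), lr=0.01)\n    toy_x = torch.rand(8, 4)                     # a random batch of 8 inputs\n    toy_y = torch.randint(0, 2, (8,))            # random class labels\n\n    toy_opt.zero_grad()                          # 1. reset accumulated gradients\n    toy_out = toy_model(toy_x)                   # 2. forward pass\n    toy_loss = criterion(toy_out, toy_y)         # 3. measure the loss\n    toy_loss.backward()                          # 4. backpropagate\n    toy_opt.step()                               # 5. nudge the weights\n\nThe real loop below does exactly this for every batch of CIFAR10\nimages, with some reporting added. 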
Go ahead and\nrun this cell, as it will likely take a few minutes to execute:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "for epoch in range(2): # loop over the dataset multiple times\n\n running_loss = 0.0\n for i, data in enumerate(trainloader, 0):\n # get the inputs\n inputs, labels = data\n\n # zero the parameter gradients\n optimizer.zero_grad()\n\n # forward + backward + optimize\n outputs = net(inputs)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n\n # print statistics\n running_loss += loss.item()\n if i % 2000 == 1999: # print every 2000 mini-batches\n print('[%d, %5d] loss: %.3f' %\n (epoch + 1, i + 1, running_loss / 2000))\n running_loss = 0.0\n\nprint('Finished Training')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we are doing only **2 training epochs** (line 1) - that is, two\npasses over the training dataset. Each pass has an inner loop that\n**iterates over the training data** (line 4), serving batches of\ntransformed input images and their correct labels.\n\n**Zeroing the gradients** (line 9) is an important step. Gradients are\naccumulated over a batch; if we do not reset them for every batch, they\nwill keep accumulating, which will provide incorrect gradient values,\nmaking learning impossible.\n\nIn line 12, we **ask the model for its predictions** on this batch. In\nthe following line (13), we compute the loss - the difference between\n`outputs` (the model prediction) and `labels` (the correct output).\n\nIn line 14, we do the `backward()` pass, and calculate the gradients\nthat will direct the learning.\n\nIn line 15, the optimizer performs one learning step - it uses the\ngradients from the `backward()` call to nudge the learning weights in\nthe direction it thinks will reduce the loss.\n\nThe remainder of the loop does some light reporting on the epoch number,\nhow many training instances have been completed, and what the collected\nloss is over the training loop.\n\n**When you run the cell above,** you should see something like this:\n\n``` {.sh}\n[1, 2000] loss: 2.235\n[1, 4000] loss: 1.940\n[1, 6000] loss: 1.713\n[1, 8000] loss: 1.573\n[1, 10000] loss: 1.507\n[1, 12000] loss: 1.442\n[2, 2000] loss: 1.378\n[2, 4000] loss: 1.364\n[2, 6000] loss: 1.349\n[2, 8000] loss: 1.319\n[2, 10000] loss: 1.284\n[2, 12000] loss: 1.267\nFinished Training\n```\n\nNote that the loss is monotonically descending, indicating that our\nmodel is continuing to improve its performance on the training dataset.\n\nAs a final step, we should check that the model is actually doing\n*general* learning, and not simply \"memorizing\" the dataset. 
This is\ncalled **overfitting,** and usually indicates that the dataset is too\nsmall (not enough examples for general learning), or that the model has\nmore learning parameters than it needs to correctly model the dataset.\n\nThis is the reason datasets are split into training and test subsets -\nto test the generality of the model, we ask it to make predictions on\ndata it hasn't trained on:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "correct = 0\ntotal = 0\nwith torch.no_grad():\n    for data in testloader:\n        images, labels = data\n        outputs = net(images)\n        _, predicted = torch.max(outputs.data, 1)\n        total += labels.size(0)\n        correct += (predicted == labels).sum().item()\n\nprint('Accuracy of the network on the 10000 test images: %d %%' % (\n    100 * correct / total))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you followed along, you should see that the model is roughly 50%\naccurate at this point. That's not exactly state-of-the-art, but it's\nfar better than the 10% accuracy we'd expect from a random output. This\ndemonstrates that some general learning did happen in the model.\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 0 }