{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://pytorch.org/tutorials/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Introduction](introyt1_tutorial.html) \\|\\| **Tensors** \\|\\|\n[Autograd](autogradyt_tutorial.html) \\|\\| [Building\nModels](modelsyt_tutorial.html) \\|\\| [TensorBoard\nSupport](tensorboardyt_tutorial.html) \\|\\| [Training\nModels](trainingyt.html) \\|\\| [Model Understanding](captumyt.html)\n\nIntroduction to PyTorch Tensors\n===============================\n\nFollow along with the video below or on\n[youtube](https://www.youtube.com/watch?v=r7QDUPb2dCM).\n\n``` {.python .jupyter-code-cell}\nfrom IPython.display import display, HTML\nhtml_code = \"\"\"\n
`torch.tensor()` creates a copy of the data.
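As a quick sketch (not a cell from the tutorial; the list `data` is just an example), mutating the source object after the call leaves the tensor unchanged:

```python
import torch

data = [[1, 2], [3, 4]]
t = torch.tensor(data)   # the values are copied into a new tensor

data[0][0] = 99          # changing the source list afterwards...
print(t)                 # ...leaves the tensor untouched: tensor([[1, 2], [3, 4]])
```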
The following cell throws a run-time error. This is intentional.
```python
a = torch.rand(2, 3)
b = torch.rand(3, 2)

# shapes (2, 3) and (3, 2) do not match, so element-wise * raises a RuntimeError
print(a * b)
```

If you are familiar with broadcasting semantics in NumPy ndarrays, you'll find the same rules apply here.
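For instance, here is a minimal sketch (the tensor `a` and the shapes below are illustrative) of shapes that do broadcast: trailing dimensions are compared right-to-left and are compatible when they are equal, one of them is 1, or one is absent.

```python
import torch

a = torch.ones(4, 3, 2)

print((a * torch.rand(3, 2)).shape)  # torch.Size([4, 3, 2]) - trailing dims match
print((a * torch.rand(3, 1)).shape)  # torch.Size([4, 3, 2]) - size-1 dim is expanded
print((a * torch.rand(1, 2)).shape)  # torch.Size([4, 3, 2]) - size-1 dim is expanded
```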
The following cell throws a run-time error. This is intentional.
```python
a = torch.ones(4, 3, 2)

b = a * torch.rand(4, 3)     # dimensions must match last-to-first

c = a * torch.rand(   2, 3)  # both 3rd & 2nd dims different

d = a * torch.rand((0, ))    # can't broadcast with an empty tensor
```

If you do not have an accelerator, the executable cells in this section will not execute any accelerator-related code.
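For example, a guard along these lines keeps accelerator code optional (a minimal sketch using the CUDA backend as the example accelerator; other backends have analogous availability checks):

```python
import torch

if torch.cuda.is_available():
    device = torch.device('cuda')
    gpu_rand = torch.rand(2, 2, device=device)  # created directly on the accelerator
    print(gpu_rand)
else:
    print('No CUDA accelerator available; running on CPU only.')
```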
The `(6 * 20 * 20,)` argument in the final line of the cell above is because PyTorch expects a **tuple** when specifying a tensor shape - but when the shape is the first argument of a method, it lets us cheat and just use a series of integers. Here, we had to add the parentheses and comma to convince the method that this is really a one-element tuple.
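The cell being referred to is not reproduced above; as a minimal sketch of the same point (the name `output3d` is illustrative):

```python
import torch

output3d = torch.rand(6, 20, 20)

# as the first argument of the .reshape() method, bare integers are accepted
print(output3d.reshape(6 * 20 * 20).shape)            # torch.Size([2400])

# the module-level torch.reshape() takes the shape as a tuple, hence (6 * 20 * 20,)
print(torch.reshape(output3d, (6 * 20 * 20,)).shape)  # torch.Size([2400])
```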