{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://pytorch.org/tutorials/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tensors\n=======\n\nTensors are a specialized data structure that are very similar to arrays\nand matrices. In PyTorch, we use tensors to encode the inputs and\noutputs of a model, as well as the model's parameters.\n\nTensors are similar to NumPy's ndarrays, except that tensors can run on\nGPUs or other specialized hardware to accelerate computing. If you're\nfamiliar with ndarrays, you'll be right at home with the Tensor API. If\nnot, follow along in this quick API walkthrough.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import torch\nimport numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tensor Initialization\n=====================\n\nTensors can be initialized in various ways. Take a look at the following\nexamples:\n\n**Directly from data**\n\nTensors can be created directly from data. The data type is\nautomatically inferred.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data = [[1, 2], [3, 4]]\nx_data = torch.tensor(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**From a NumPy array**\n\nTensors can be created from NumPy arrays (and vice versa - see\n`bridge-to-np-label`{.interpreted-text role=\"ref\"}).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "np_array = np.array(data)\nx_np = torch.from_numpy(np_array)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**From another tensor:**\n\nThe new tensor retains the properties (shape, datatype) of the argument\ntensor, unless explicitly overridden.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "x_ones = torch.ones_like(x_data) # retains the properties of x_data\nprint(f\"Ones Tensor: \\n {x_ones} \\n\")\n\nx_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data\nprint(f\"Random Tensor: \\n {x_rand} \\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**With random or constant values:**\n\n`shape` is a tuple of tensor dimensions. 
In the functions below, it\ndetermines the dimensionality of the output tensor.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "shape = (2, 3,)\nrand_tensor = torch.rand(shape)\nones_tensor = torch.ones(shape)\nzeros_tensor = torch.zeros(shape)\n\nprint(f\"Random Tensor: \\n {rand_tensor} \\n\")\nprint(f\"Ones Tensor: \\n {ones_tensor} \\n\")\nprint(f\"Zeros Tensor: \\n {zeros_tensor}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "------------------------------------------------------------------------\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tensor Attributes\n=================\n\nTensor attributes describe a tensor's shape, datatype, and the device\non which it is stored.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "tensor = torch.rand(3, 4)\n\nprint(f\"Shape of tensor: {tensor.shape}\")\nprint(f\"Datatype of tensor: {tensor.dtype}\")\nprint(f\"Device tensor is stored on: {tensor.device}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "------------------------------------------------------------------------\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tensor Operations\n=================\n\nOver 100 tensor operations, including transposing, indexing, slicing,\nmathematical operations, linear algebra, random sampling, and more, are\ncomprehensively described\n[here](https://pytorch.org/docs/stable/torch.html).\n\nEach of them can be run on the GPU (typically at higher speeds than on a\nCPU). If you're using Colab, allocate a GPU by going to Edit \> Notebook\nSettings.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# We move our tensor to the GPU if available\nif torch.cuda.is_available():\n    tensor = tensor.to('cuda')\n    print(f\"Device tensor is stored on: {tensor.device}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try out some of the operations from the list. If you're familiar with\nthe NumPy API, you'll find the Tensor API a breeze to use.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Standard numpy-like indexing and slicing:**\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "tensor = torch.ones(4, 4)\ntensor[:, 1] = 0  # set the second column to zero\nprint(tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Joining tensors** You can use `torch.cat` to concatenate a sequence of\ntensors along a given dimension. 
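A closely related op,\n`torch.stack`, inserts a *new* dimension instead. The next cell is a\nminimal, illustrative sketch of the difference (the input tensors are\njust placeholders):\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "a = torch.ones(2, 3)\nb = torch.zeros(2, 3)\n\n# cat joins along an existing dimension: two (2, 3) tensors -> (4, 3)\nprint(torch.cat([a, b], dim=0).shape)\n\n# stack inserts a new dimension: two (2, 3) tensors -> (2, 2, 3)\nprint(torch.stack([a, b], dim=0).shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "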
See also\n[torch.stack](https://pytorch.org/docs/stable/generated/torch.stack.html),\nanother tensor joining op that is subtly different from `torch.cat`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "t1 = torch.cat([tensor, tensor, tensor], dim=1)\nprint(t1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Multiplying tensors**\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# This computes the element-wise product\nprint(f\"tensor.mul(tensor) \\n {tensor.mul(tensor)} \\n\")\n# Alternative syntax:\nprint(f\"tensor * tensor \\n {tensor * tensor}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This computes the matrix multiplication between two tensors:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(f\"tensor.matmul(tensor.T) \\n {tensor.matmul(tensor.T)} \\n\")\n# Alternative syntax:\nprint(f\"tensor @ tensor.T \\n {tensor @ tensor.T}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**In-place operations** Operations that have a `_` suffix are in-place.\nFor example, `x.copy_(y)` and `x.t_()` will change `x`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(tensor, \"\\n\")\ntensor.add_(5)\nprint(tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{=html}\n
<div class=\"alert alert-info\"><h4>Note</h4><p>In-place operations save some memory, but can be problematic when computing derivatives because of an immediate loss of history. Hence, their use is discouraged.</p></div>
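\n```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For example (a minimal sketch; `requires_grad=True` is used purely for\nillustration), an in-place update of a leaf tensor that autograd is\ntracking raises an error immediately, rather than producing a wrong\ngradient later:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "x = torch.ones(3, requires_grad=True)\ntry:\n    x.add_(1)  # in-place update of a tracked leaf tensor\nexcept RuntimeError as e:\n    print(f\"RuntimeError: {e}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{=html}\n<p>If you need to keep the original value and its history, prefer the out-of-place form (for example <code>x.add(1)</code>), which returns a new tensor.</p>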
\n```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "------------------------------------------------------------------------\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Bridge with NumPy {#bridge-to-np-label}\n=================\n\nTensors on the CPU and NumPy arrays can share their underlying memory\nlocations, and changing one will change the other.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tensor to NumPy array\n=====================\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "t = torch.ones(5)\nprint(f\"t: {t}\")\nn = t.numpy()\nprint(f\"n: {n}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A change in the tensor reflects in the NumPy array.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "t.add_(1)\nprint(f\"t: {t}\")\nprint(f\"n: {n}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NumPy array to Tensor\n=====================\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n = np.ones(5)\nt = torch.from_numpy(n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Changes in the NumPy array reflects in the tensor.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "np.add(n, 1, out=n)\nprint(f\"t: {t}\")\nprint(f\"n: {n}\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 0 }