{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://pytorch.org/tutorials/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "[Learn the Basics](intro.html) \\|\\|\n[Quickstart](quickstart_tutorial.html) \\|\\| **Tensors** \\|\\| [Datasets &\nDataLoaders](data_tutorial.html) \\|\\|\n[Transforms](transforms_tutorial.html) \\|\\| [Build\nModel](buildmodel_tutorial.html) \\|\\|\n[Autograd](autogradqs_tutorial.html) \\|\\|\n[Optimization](optimization_tutorial.html) \\|\\| [Save & Load\nModel](saveloadrun_tutorial.html)\n\nTensors\n=======\n\nTensors are a specialized data structure that are very similar to arrays\nand matrices. In PyTorch, we use tensors to encode the inputs and\noutputs of a model, as well as the model's parameters.\n\nTensors are similar to [NumPy's](https://numpy.org/) ndarrays, except\nthat tensors can run on GPUs or other hardware accelerators. In fact,\ntensors and NumPy arrays can often share the same underlying memory,\neliminating the need to copy data (see\n`bridge-to-np-label`{.interpreted-text role=\"ref\"}). Tensors are also\noptimized for automatic differentiation (we\\'ll see more about that\nlater in the [Autograd](autogradqs_tutorial.html) section). If you're\nfamiliar with ndarrays, you'll be right at home with the Tensor API. If\nnot, follow along!\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import torch\nimport numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initializing a Tensor\n=====================\n\nTensors can be initialized in various ways. Take a look at the following\nexamples:\n\n**Directly from data**\n\nTensors can be created directly from data. The data type is\nautomatically inferred.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data = [[1, 2],[3, 4]]\nx_data = torch.tensor(data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**From a NumPy array**\n\nTensors can be created from NumPy arrays (and vice versa - see\n`bridge-to-np-label`{.interpreted-text role=\"ref\"}).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "np_array = np.array(data)\nx_np = torch.from_numpy(np_array)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**From another tensor:**\n\nThe new tensor retains the properties (shape, datatype) of the argument\ntensor, unless explicitly overridden.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "x_ones = torch.ones_like(x_data) # retains the properties of x_data\nprint(f\"Ones Tensor: \\n {x_ones} \\n\")\n\nx_rand = torch.rand_like(x_data, dtype=torch.float) # overrides the datatype of x_data\nprint(f\"Random Tensor: \\n {x_rand} \\n\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**With random or constant values:**\n\n`shape` is a tuple of tensor dimensions. In the functions below, it\ndetermines the dimensionality of the output tensor.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "shape = (2,3,)\nrand_tensor = torch.rand(shape)\nones_tensor = torch.ones(shape)\nzeros_tensor = torch.zeros(shape)\n\nprint(f\"Random Tensor: \\n {rand_tensor} \\n\")\nprint(f\"Ones Tensor: \\n {ones_tensor} \\n\")\nprint(f\"Zeros Tensor: \\n {zeros_tensor}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "------------------------------------------------------------------------\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Attributes of a Tensor\n======================\n\nTensor attributes describe their shape, datatype, and the device on\nwhich they are stored.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "tensor = torch.rand(3,4)\n\nprint(f\"Shape of tensor: {tensor.shape}\")\nprint(f\"Datatype of tensor: {tensor.dtype}\")\nprint(f\"Device tensor is stored on: {tensor.device}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "------------------------------------------------------------------------\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Operations on Tensors\n=====================\n\nOver 1200 tensor operations, including arithmetic, linear algebra,\nmatrix manipulation (transposing, indexing, slicing), sampling and more\nare comprehensively described\n[here](https://pytorch.org/docs/stable/torch.html).\n\nEach of these operations can be run on the CPU and\n[Accelerator](https://pytorch.org/docs/stable/torch.html#accelerators)\nsuch as CUDA, MPS, MTIA, or XPU. If you're using Colab, allocate an\naccelerator by going to Runtime \\> Change runtime type \\> GPU.\n\nBy default, tensors are created on the CPU. We need to explicitly move\ntensors to the accelerator using `.to` method (after checking for\naccelerator availability). Keep in mind that copying large tensors\nacross devices can be expensive in terms of time and memory!\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# We move our tensor to the current accelerator if available\nif torch.accelerator.is_available():\n tensor = tensor.to(torch.accelerator.current_accelerator())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Try out some of the operations from the list. If you\\'re familiar with\nthe NumPy API, you\\'ll find the Tensor API a breeze to use.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Standard numpy-like indexing and slicing:**\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "tensor = torch.ones(4, 4)\nprint(f\"First row: {tensor[0]}\")\nprint(f\"First column: {tensor[:, 0]}\")\nprint(f\"Last column: {tensor[..., -1]}\")\ntensor[:,1] = 0\nprint(tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Joining tensors** You can use `torch.cat` to concatenate a sequence of\ntensors along a given dimension. See also\n[torch.stack](https://pytorch.org/docs/stable/generated/torch.stack.html),\nanother tensor joining operator that is subtly different from\n`torch.cat`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "t1 = torch.cat([tensor, tensor, tensor], dim=1)\nprint(t1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Arithmetic operations**\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# This computes the matrix multiplication between two tensors. y1, y2, y3 will have the same value\n# ``tensor.T`` returns the transpose of a tensor\ny1 = tensor @ tensor.T\ny2 = tensor.matmul(tensor.T)\n\ny3 = torch.rand_like(y1)\ntorch.matmul(tensor, tensor.T, out=y3)\n\n\n# This computes the element-wise product. z1, z2, z3 will have the same value\nz1 = tensor * tensor\nz2 = tensor.mul(tensor)\n\nz3 = torch.rand_like(tensor)\ntorch.mul(tensor, tensor, out=z3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Single-element tensors** If you have a one-element tensor, for example\nby aggregating all values of a tensor into one value, you can convert it\nto a Python numerical value using `item()`:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "agg = tensor.sum()\nagg_item = agg.item()\nprint(agg_item, type(agg_item))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**In-place operations** Operations that store the result into the\noperand are called in-place. They are denoted by a `_` suffix. For\nexample: `x.copy_(y)`, `x.t_()`, will change `x`.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(f\"{tensor} \\n\")\ntensor.add_(5)\nprint(tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```{=html}\n
NOTE:
\n```\n```{=html}\n
\n```\n```{=html}\n

In-place operations save some memory, but can be problematic when computing derivatives because of an immediate lossof history. Hence, their use is discouraged.

\n```\n```{=html}\n
\n```\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "------------------------------------------------------------------------\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Bridge with NumPy {#bridge-to-np-label}\n=================\n\nTensors on the CPU and NumPy arrays can share their underlying memory\nlocations, and changing one will change the other.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Tensor to NumPy array\n=====================\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "t = torch.ones(5)\nprint(f\"t: {t}\")\nn = t.numpy()\nprint(f\"n: {n}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A change in the tensor reflects in the NumPy array.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "t.add_(1)\nprint(f\"t: {t}\")\nprint(f\"n: {n}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "NumPy array to Tensor\n=====================\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "n = np.ones(5)\nt = torch.from_numpy(n)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Changes in the NumPy array reflects in the tensor.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "np.add(n, 1, out=n)\nprint(f\"t: {t}\")\nprint(f\"n: {n}\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 0 }