{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://codelin.vip/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Profiling your PyTorch Module\n=============================\n\n**Author:** [Suraj Subramanian](https://github.com/suraj813)\n\nPyTorch includes a profiler API that is useful to identify the time and\nmemory costs of various PyTorch operations in your code. Profiler can be\neasily integrated in your code, and the results can be printed as a\ntable or returned in a JSON trace file.\n\n```{=html}\n
Profiler supports multithreaded models. Profiler runs in the same thread as the operation but it will also profile child operators that might run in another thread. Concurrently-running profilers will be scoped to their own thread to prevent mixing of results.
\n```\n```{=html}\nPyTorch 1.8 introduces a new API that will replace the older profiler API in future releases. Check the new API at this page.
\n```\n```{=html}\nwith_stack=True
incurs an additional overhead, and is better suited for investigating code. Remember to remove it if you are benchmarking performance.
When running the profiler in a notebook, you might see entries like <ipython-input-18-193a910735e8>(13): forward
instead of filenames in the stack trace. These correspond to <notebook-cell>(line number): calling-function.