{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://pytorch.org/tutorials/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Model ensembling\n================\n\nThis tutorial illustrates how to vectorize model ensembling using\n`torch.vmap`.\n\nWhat is model ensembling?\n-------------------------\n\nModel ensembling combines the predictions from multiple models.\nTraditionally this is done by running each model on some inputs\nseparately and then combining the predictions. However, if you\\'re\nrunning models with the same architecture, then it may be possible to\ncombine them using `torch.vmap`. `vmap` is a function transform\nthat maps a function across dimensions of the input tensors. One of its\nuse cases is eliminating for-loops by replacing them with faster\nvectorized computation.\n\nLet\\'s demonstrate how to do this using an ensemble of simple MLPs.\n\n```{=html}\n
<div class=\"alert alert-info\"><h4>Note</h4><p>This tutorial requires PyTorch 2.0.0 or later.</p></div>
\n```\n```{=html}\n