{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# For tips on running notebooks in Google Colab, see\n# https://pytorch.org/tutorials/beginner/colab\n%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Transfer Learning for Computer Vision Tutorial\n==============================================\n\n**Author**: [Sasank Chilamkurthy](https://chsasank.github.io)\n\nIn this tutorial, you will learn how to train a convolutional neural\nnetwork for image classification using transfer learning. You can read\nmore about the transfer learning at [cs231n\nnotes](https://cs231n.github.io/transfer-learning/)\n\nQuoting these notes,\n\n> In practice, very few people train an entire Convolutional Network\n> from scratch (with random initialization), because it is relatively\n> rare to have a dataset of sufficient size. Instead, it is common to\n> pretrain a ConvNet on a very large dataset (e.g. ImageNet, which\n> contains 1.2 million images with 1000 categories), and then use the\n> ConvNet either as an initialization or a fixed feature extractor for\n> the task of interest.\n\nThese two major transfer learning scenarios look as follows:\n\n- **Finetuning the ConvNet**: Instead of random initialization, we\n initialize the network with a pretrained network, like the one that\n is trained on imagenet 1000 dataset. Rest of the training looks as\n usual.\n- **ConvNet as fixed feature extractor**: Here, we will freeze the\n weights for all of the network except that of the final fully\n connected layer. This last fully connected layer is replaced with a\n new one with random weights and only this layer is trained.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# License: BSD\n# Author: Sasank Chilamkurthy\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nfrom torch.optim import lr_scheduler\nimport torch.backends.cudnn as cudnn\nimport numpy as np\nimport torchvision\nfrom torchvision import datasets, models, transforms\nimport matplotlib.pyplot as plt\nimport time\nimport os\nfrom PIL import Image\nfrom tempfile import TemporaryDirectory\n\ncudnn.benchmark = True\nplt.ion() # interactive mode" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Load Data\n=========\n\nWe will use torchvision and torch.utils.data packages for loading the\ndata.\n\nThe problem we\\'re going to solve today is to train a model to classify\n**ants** and **bees**. We have about 120 training images each for ants\nand bees. There are 75 validation images for each class. 
Usually, this is far\ntoo small a dataset to generalize well if trained from scratch. Since\nwe are using transfer learning, we should be able to generalize\nreasonably well.\n\nThis dataset is a very small subset of ImageNet.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Data augmentation and normalization for training\n# Just normalization for validation\ndata_transforms = {\n    'train': transforms.Compose([\n        transforms.RandomResizedCrop(224),\n        transforms.RandomHorizontalFlip(),\n        transforms.ToTensor(),\n        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n    ]),\n    'val': transforms.Compose([\n        transforms.Resize(256),\n        transforms.CenterCrop(224),\n        transforms.ToTensor(),\n        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n    ]),\n}\n\ndata_dir = 'data/hymenoptera_data'\nimage_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),\n                                          data_transforms[x])\n                  for x in ['train', 'val']}\ndataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,\n                                              shuffle=True, num_workers=4)\n               for x in ['train', 'val']}\ndataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}\nclass_names = image_datasets['train'].classes\n\n# We want to be able to train our model on an accelerator such as CUDA,\n# MPS, MTIA, or XPU. If the current accelerator is available, we will\n# use it. Otherwise, we use the CPU.\n\ndevice = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else \"cpu\"\nprint(f\"Using {device} device\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualize a few images\n======================\n\nLet\\'s visualize a few training images to understand the data\naugmentations.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def imshow(inp, title=None):\n    \"\"\"Display image for Tensor.\"\"\"\n    inp = inp.numpy().transpose((1, 2, 0))\n    mean = np.array([0.485, 0.456, 0.406])\n    std = np.array([0.229, 0.224, 0.225])\n    inp = std * inp + mean\n    inp = np.clip(inp, 0, 1)\n    plt.imshow(inp)\n    if title is not None:\n        plt.title(title)\n    plt.pause(0.001)  # pause a bit so that plots are updated\n\n\n# Get a batch of training data\ninputs, classes = next(iter(dataloaders['train']))\n\n# Make a grid from batch\nout = torchvision.utils.make_grid(inputs)\n\nimshow(out, title=[class_names[x] for x in classes])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Training the model\n==================\n\nNow, let\\'s write a general function to train a model. Here, we will\nillustrate:\n\n- Scheduling the learning rate\n- Saving the best model\n\nIn the following, parameter `scheduler` is an LR scheduler object from\n`torch.optim.lr_scheduler`.\n" ] }, 
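{ "cell_type": "markdown", "metadata": {}, "source": [ "Before using the scheduler inside the training loop, it can help to see\nwhat it does on its own. The short sketch below (illustrative only, not\npart of the tutorial pipeline) steps a `StepLR` scheduler on a throwaway\noptimizer and prints the decayed learning rate:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Illustrative sketch: StepLR multiplies the learning rate by ``gamma``\n# every ``step_size`` epochs. The tiny parameter and the values here are\n# placeholders, chosen only to show the schedule.\n_demo_param = [torch.zeros(1, requires_grad=True)]\n_demo_opt = optim.SGD(_demo_param, lr=0.1)\n_demo_sched = lr_scheduler.StepLR(_demo_opt, step_size=2, gamma=0.5)\nfor _epoch in range(6):\n    _demo_opt.step()    # in real training the optimizer steps first ...\n    _demo_sched.step()  # ... then the scheduler, once per epoch\n    print(_epoch, _demo_sched.get_last_lr())" ] }, 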
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def train_model(model, criterion, optimizer, scheduler, num_epochs=25):\n    since = time.time()\n\n    # Create a temporary directory to save training checkpoints\n    with TemporaryDirectory() as tempdir:\n        best_model_params_path = os.path.join(tempdir, 'best_model_params.pt')\n\n        torch.save(model.state_dict(), best_model_params_path)\n        best_acc = 0.0\n\n        for epoch in range(num_epochs):\n            print(f'Epoch {epoch}/{num_epochs - 1}')\n            print('-' * 10)\n\n            # Each epoch has a training and validation phase\n            for phase in ['train', 'val']:\n                if phase == 'train':\n                    model.train()  # Set model to training mode\n                else:\n                    model.eval()   # Set model to evaluate mode\n\n                running_loss = 0.0\n                running_corrects = 0\n\n                # Iterate over data.\n                for inputs, labels in dataloaders[phase]:\n                    inputs = inputs.to(device)\n                    labels = labels.to(device)\n\n                    # zero the parameter gradients\n                    optimizer.zero_grad()\n\n                    # forward\n                    # track history only if in train\n                    with torch.set_grad_enabled(phase == 'train'):\n                        outputs = model(inputs)\n                        _, preds = torch.max(outputs, 1)\n                        loss = criterion(outputs, labels)\n\n                        # backward + optimize only if in training phase\n                        if phase == 'train':\n                            loss.backward()\n                            optimizer.step()\n\n                    # statistics\n                    running_loss += loss.item() * inputs.size(0)\n                    running_corrects += torch.sum(preds == labels.data)\n                if phase == 'train':\n                    scheduler.step()\n\n                epoch_loss = running_loss / dataset_sizes[phase]\n                epoch_acc = running_corrects.double() / dataset_sizes[phase]\n\n                print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')\n\n                # checkpoint the model if it is the best so far\n                if phase == 'val' and epoch_acc > best_acc:\n                    best_acc = epoch_acc\n                    torch.save(model.state_dict(), best_model_params_path)\n\n            print()\n\n        time_elapsed = time.time() - since\n        print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s')\n        print(f'Best val Acc: {best_acc:.4f}')\n\n        # load best model weights\n        model.load_state_dict(torch.load(best_model_params_path, weights_only=True))\n    return model" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Visualizing the model predictions\n=================================\n\nA generic function to display predictions for a few images.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def visualize_model(model, num_images=6):\n    was_training = model.training\n    model.eval()\n    images_so_far = 0\n    fig = plt.figure()\n\n    with torch.no_grad():\n        for i, (inputs, labels) in enumerate(dataloaders['val']):\n            inputs = inputs.to(device)\n            labels = labels.to(device)\n\n            outputs = model(inputs)\n            _, preds = torch.max(outputs, 1)\n\n            for j in range(inputs.size()[0]):\n                images_so_far += 1\n                ax = plt.subplot(num_images//2, 2, images_so_far)\n                ax.axis('off')\n                ax.set_title(f'predicted: {class_names[preds[j]]}')\n                imshow(inputs.cpu().data[j])\n\n                if images_so_far == num_images:\n                    model.train(mode=was_training)\n                    return\n        model.train(mode=was_training)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finetuning the ConvNet\n======================\n\nLoad a pretrained model and reset the final fully connected layer.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_ft = models.resnet18(weights='IMAGENET1K_V1')\nnum_ftrs = model_ft.fc.in_features\n# Here the size of each output sample is set to 2.\n# Alternatively, it can be generalized to ``nn.Linear(num_ftrs, len(class_names))``.\nmodel_ft.fc = nn.Linear(num_ftrs, 2)\n\nmodel_ft = model_ft.to(device)\n\ncriterion = nn.CrossEntropyLoss()\n\n# Observe that all parameters are being optimized\noptimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)\n\n# Decay LR by a factor of 0.1 every 7 epochs\nexp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)" ] }, 
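{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check (optional, illustrative), you can confirm that\nthe replaced head has the expected shape and that every parameter is\ntrainable in this scenario:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Optional sanity check: in the finetuning scenario every parameter\n# requires gradients, and the new head maps num_ftrs features to 2 classes.\nprint(model_ft.fc)  # e.g. Linear(in_features=512, out_features=2, bias=True)\nnum_trainable = sum(p.numel() for p in model_ft.parameters() if p.requires_grad)\nprint(f'trainable parameters: {num_trainable}')" ] }, 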
"source": [ "model_ft = models.resnet18(weights='IMAGENET1K_V1')\nnum_ftrs = model_ft.fc.in_features\n# Here the size of each output sample is set to 2.\n# Alternatively, it can be generalized to ``nn.Linear(num_ftrs, len(class_names))``.\nmodel_ft.fc = nn.Linear(num_ftrs, 2)\n\nmodel_ft = model_ft.to(device)\n\ncriterion = nn.CrossEntropyLoss()\n\n# Observe that all parameters are being optimized\noptimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)\n\n# Decay LR by a factor of 0.1 every 7 epochs\nexp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Train and evaluate\n==================\n\nIt should take around 15-25 min on CPU. On GPU though, it takes less\nthan a minute.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,\n num_epochs=25)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "visualize_model(model_ft)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "ConvNet as fixed feature extractor\n==================================\n\nHere, we need to freeze all the network except the final layer. We need\nto set `requires_grad = False` to freeze the parameters so that the\ngradients are not computed in `backward()`.\n\nYou can read more about this in the documentation\n[here](https://pytorch.org/docs/notes/autograd.html#excluding-subgraphs-from-backward).\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_conv = torchvision.models.resnet18(weights='IMAGENET1K_V1')\nfor param in model_conv.parameters():\n param.requires_grad = False\n\n# Parameters of newly constructed modules have requires_grad=True by default\nnum_ftrs = model_conv.fc.in_features\nmodel_conv.fc = nn.Linear(num_ftrs, 2)\n\nmodel_conv = model_conv.to(device)\n\ncriterion = nn.CrossEntropyLoss()\n\n# Observe that only parameters of final layer are being optimized as\n# opposed to before.\noptimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)\n\n# Decay LR by a factor of 0.1 every 7 epochs\nexp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Train and evaluate\n==================\n\nOn CPU this will take about half the time compared to previous scenario.\nThis is expected as gradients don\\'t need to be computed for most of the\nnetwork. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Train and evaluate\n==================\n\nOn CPU, this will take about half the time of the previous scenario.\nThis is expected, as gradients do not need to be computed for most of\nthe network. The forward pass, however, still has to be computed for\nevery layer.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "model_conv = train_model(model_conv, criterion, optimizer_conv,\n                         exp_lr_scheduler, num_epochs=25)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "visualize_model(model_conv)\n\nplt.ioff()\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Inference on custom images\n==========================\n\nUse the trained model to make predictions on custom images and visualize\nthe predicted class labels along with the images.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "def visualize_model_predictions(model, img_path):\n    was_training = model.training\n    model.eval()\n\n    img = Image.open(img_path).convert('RGB')  # ensure 3 channels for Normalize\n    img = data_transforms['val'](img)\n    img = img.unsqueeze(0)\n    img = img.to(device)\n\n    with torch.no_grad():\n        outputs = model(img)\n        _, preds = torch.max(outputs, 1)\n\n        ax = plt.subplot(2, 2, 1)\n        ax.axis('off')\n        ax.set_title(f'Predicted: {class_names[preds[0]]}')\n        imshow(img.cpu().data[0])\n\n    model.train(mode=was_training)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "visualize_model_predictions(\n    model_conv,\n    img_path='data/hymenoptera_data/val/bees/72100438_73de9f17af.jpg'\n)\n\nplt.ioff()\nplt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Further Learning\n================\n\nIf you would like to learn more about the applications of transfer\nlearning, check out our [Quantized Transfer Learning for Computer Vision\nTutorial](https://pytorch.org/tutorials/intermediate/quantized_transfer_learning_tutorial.html).\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 0 }