From 21c059d6508faf1a3285fb8b006d4fe00c74365f Mon Sep 17 00:00:00 2001 From: pritesh2000 Date: Tue, 3 Sep 2024 19:29:55 +0530 Subject: [PATCH] Fix typos: dependant -> dependent, visualing -> visualizing --- 03_pytorch_computer_vision.ipynb | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-) diff --git a/03_pytorch_computer_vision.ipynb b/03_pytorch_computer_vision.ipynb index 204eeb97..bbf5a08f 100644 --- a/03_pytorch_computer_vision.ipynb +++ b/03_pytorch_computer_vision.ipynb @@ -1976,7 +1976,7 @@ ">\n", "> But for larger datasets and models, the speed of computing the GPU can offer usually far outweighs the cost of getting the data there.\n", ">\n", - "> However, this is largely dependant on the hardware you're using. With practice, you will get used to where the best place to train your models is. \n", + "> However, this is largely dependent on the hardware you're using. With practice, you will get used to where the best place to train your models is. \n", "\n", "Let's evaluate our trained `model_1` using our `eval_model()` function and see how it went."
] @@ -2220,7 +2220,7 @@ "| Structured data (Excel spreadsheets, row and column data) | Gradient boosted models, Random Forests, XGBoost | [`sklearn.ensemble`](https://scikit-learn.org/stable/modules/classes.html#module-sklearn.ensemble), [XGBoost library](https://xgboost.readthedocs.io/en/stable/) |\n", "| Unstructured data (images, audio, language) | Convolutional Neural Networks, Transformers | [`torchvision.models`](https://pytorch.org/vision/stable/models.html), [HuggingFace Transformers](https://huggingface.co/docs/transformers/index) | \n", "\n", - "> **Note:** The table above is only for reference, the model you end up using will be highly dependant on the problem you're working on and the constraints you have (amount of data, latency requirements).\n", + "> **Note:** The table above is only for reference, the model you end up using will be highly dependent on the problem you're working on and the constraints you have (amount of data, latency requirements).\n", "\n", "Enough talking about models, let's now build a CNN that replicates the model on the [CNN Explainer website](https://poloclub.github.io/cnn-explainer/).\n", "\n", @@ -3682,7 +3682,7 @@ "\n", "However, this performance increase often comes at a sacrifice of training speed and inference speed.\n", "\n", - "> **Note:** The training times you get will be very dependant on the hardware you use. \n", + "> **Note:** The training times you get will be very dependent on the hardware you use. \n", ">\n", "> Generally, the more CPU cores you have, the faster your models will train on CPU. And similar for GPUs.\n", "> \n", @@ -4282,7 +4282,7 @@ "\n", "We can use this kind of information to further inspect our models and data to see how it could be improved.\n", "\n", - "> **Exercise:** Use the trained `model_2` to make predictions on the test FashionMNIST dataset. Then plot some predictions where the model was wrong alongside what the label of the image should've been. 
After visualing these predictions do you think it's more of a modelling error or a data error? As in, could the model do better or are the labels of the data too close to each other (e.g. a \"Shirt\" label is too close to \"T-shirt/top\")?" + "> **Exercise:** Use the trained `model_2` to make predictions on the test FashionMNIST dataset. Then plot some predictions where the model was wrong alongside what the label of the image should've been. After visualizing these predictions do you think it's more of a modelling error or a data error? As in, could the model do better or are the labels of the data too close to each other (e.g. a \"Shirt\" label is too close to \"T-shirt/top\")?" ] }, { @@ -4544,7 +4544,7 @@ "12. Create a random tensor of shape `[1, 3, 64, 64]` and pass it through a `nn.Conv2d()` layer with various hyperparameter settings (these can be any settings you choose), what do you notice if the `kernel_size` parameter goes up and down?\n", "13. Use a model similar to the trained `model_2` from this notebook to make predictions on the test [`torchvision.datasets.FashionMNIST`](https://pytorch.org/vision/main/generated/torchvision.datasets.FashionMNIST.html) dataset. \n", " * Then plot some predictions where the model was wrong alongside what the label of the image should've been. \n", - " * After visualing these predictions do you think it's more of a modelling error or a data error? \n", + " * After visualizing these predictions do you think it's more of a modelling error or a data error? \n", " * As in, could the model do better or are the labels of the data too close to each other (e.g. a \"Shirt\" label is too close to \"T-shirt/top\")?\n", "\n", "## Extra-curriculum\n", @@ -4553,6 +4553,12 @@ "* Lookup \"most common convolutional neural networks\", what architectures do you find? Are any of them contained within the [`torchvision.models`](https://pytorch.org/vision/stable/models.html) library? 
What do you think you could do with these?\n", "* For a large number of pretrained PyTorch computer vision models as well as many different extensions to PyTorch's computer vision functionalities check out the [PyTorch Image Models library `timm`](https://github.com/rwightman/pytorch-image-models/) (Torch Image Models) by Ross Wightman." ] + }, + { + "cell_type": "markdown", + "id": "3690b822", + "metadata": {}, + "source": [] } ], "metadata": {