From 8d98438ba9d9f9af0e8692d79a3f493776450ac5 Mon Sep 17 00:00:00 2001
From: pritesh2000
Date: Sat, 31 Aug 2024 17:23:17 +0530
Subject: [PATCH] all typos done

---
 08_pytorch_paper_replicating.ipynb | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/08_pytorch_paper_replicating.ipynb b/08_pytorch_paper_replicating.ipynb
index 61759385..12916ff2 100644
--- a/08_pytorch_paper_replicating.ipynb
+++ b/08_pytorch_paper_replicating.ipynb
@@ -4103,7 +4103,7 @@
     "id": "45a65cda-db08-441c-9f60-cf79138e029d"
    },
    "source": [
-    "Then we'll setup device-agonistc code."
+    "Then we'll setup device-agnostic code."
    ]
   },
   {
@@ -4327,7 +4327,7 @@
    "source": [
     "Finally, we'll transform our images into tensors and turn the tensors into DataLoaders.\n",
     "\n",
-    "Since we're using a pretrained model form `torchvision.models` we can call the `transforms()` method on it to get its required transforms.\n",
+    "Since we're using a pretrained model from `torchvision.models` we can call the `transforms()` method on it to get its required transforms.\n",
     "\n",
     "Remember, if you're going to use a pretrained model, it's generally important to **ensure your own custom data is transformed/formatted in the same way the data the original model was trained on**.\n",
     "\n",
@@ -4372,7 +4372,7 @@
    "source": [
     "And now we've got transforms ready, we can turn our images into DataLoaders using the `data_setup.create_dataloaders()` method we created in [05. PyTorch Going Modular section 2](https://www.learnpytorch.io/05_pytorch_going_modular/#2-create-datasets-and-dataloaders-data_setuppy).\n",
     "\n",
-    "Since we're using a feature extractor model (less trainable parameters), we could increase the batch size to a higher value (if we set it to 1024, we'd be mimicing an improvement found in [*Better plain ViT baselines for ImageNet-1k*](https://arxiv.org/abs/2205.01580), a paper which improves upon the original ViT paper and suggested extra reading). But since we only have ~200 training samples total, we'll stick with 32."
+    "Since we're using a feature extractor model (less trainable parameters), we could increase the batch size to a higher value (if we set it to 1024, we'd be mimicking an improvement found in [*Better plain ViT baselines for ImageNet-1k*](https://arxiv.org/abs/2205.01580), a paper which improves upon the original ViT paper and suggested extra reading). But since we only have ~200 training samples total, we'll stick with 32."
    ]
   },
   {
@@ -4649,7 +4649,7 @@
     "\n",
     "> **Note:** ^ the EffNetB2 model in reference was trained with 20% of pizza, steak and sushi data (double the amount of images) rather than the ViT feature extractor which was trained with 10% of pizza, steak and sushi data. An exercise would be to train the ViT feature extractor model on the same amount of data and see how much the results improve.\n",
     "\n",
-    "The EffNetB2 model is ~11x smaller than the ViT model with similiar results for test loss and accuracy.\n",
+    "The EffNetB2 model is ~11x smaller than the ViT model with similar results for test loss and accuracy.\n",
     "\n",
     "However, the ViT model's results may improve more when trained with the same data (20% pizza, steak and sushi data).\n",
     "\n",
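
For context on the notebook passages the first three hunks touch: those cells describe setting up device-agnostic code, asking a pretrained `torchvision.models` ViT for its required transforms, and turning the images into DataLoaders with `data_setup.create_dataloaders()`. The sketch below is a minimal approximation of those steps, not code from this patch; the `going_modular` import path and the `data/pizza_steak_sushi/*` directories are assumptions based on the course layout.

```python
import torch
import torchvision
# Assumed import path for the course's data_setup.py (from 05. PyTorch Going Modular)
from going_modular.going_modular import data_setup

# Device-agnostic code: use a GPU if available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Get pretrained ViT-B/16 weights and the transforms they were trained with
pretrained_vit_weights = torchvision.models.ViT_B_16_Weights.DEFAULT
pretrained_vit_transforms = pretrained_vit_weights.transforms()

# Turn the image folders into DataLoaders (paths are placeholders for the pizza/steak/sushi data)
train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(
    train_dir="data/pizza_steak_sushi/train",
    test_dir="data/pizza_steak_sushi/test",
    transform=pretrained_vit_transforms,
    batch_size=32,  # ~200 training samples total, so 32 is enough (1024 would mimic the Better plain ViT baselines recipe)
)
```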
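Similarly, the "~11x smaller" comparison in the last hunk can be roughly sanity-checked by counting parameters on the two pretrained torchvision backbones. This is only a ballpark check: the notebook's figure refers to the saved feature-extractor models, and the exact ratio depends on the torchvision version and on which classifier heads are attached.

```python
import torchvision

# Instantiate both pretrained backbones with their default ImageNet weights
effnetb2 = torchvision.models.efficientnet_b2(weights=torchvision.models.EfficientNet_B2_Weights.DEFAULT)
vit_b_16 = torchvision.models.vit_b_16(weights=torchvision.models.ViT_B_16_Weights.DEFAULT)

def count_params(model: torch.nn.Module) -> int:
    """Total number of parameters (trainable or not) in a model."""
    return sum(p.numel() for p in model.parameters())

import torch  # needed for the type hint above

effnet_params = count_params(effnetb2)
vit_params = count_params(vit_b_16)
print(f"EffNetB2 parameters: {effnet_params:,}")
print(f"ViT-B/16 parameters: {vit_params:,}")
print(f"ViT-B/16 is roughly {vit_params / effnet_params:.1f}x larger by parameter count")
```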