all typos done
pritesh2000 committed Aug 29, 2024
1 parent 26372b2 commit efe3af4
Showing 1 changed file with 9 additions and 9 deletions.
18 changes: 9 additions & 9 deletions 09_pytorch_model_deployment.ipynb
@@ -3747,7 +3747,7 @@
"\n",
"We'll go from three classes to 101!\n",
"\n",
"From pizza, steak, sushi to pizza, steak, sushi, hot dog, apple pie, carrot cake, chocolate cake, french fires, garlic bread, ramen, nachos, tacos and more!\n",
"From pizza, steak, sushi to pizza, steak, sushi, hot dog, apple pie, carrot cake, chocolate cake, french fries, garlic bread, ramen, nachos, tacos and more!\n",
"\n",
"How?\n",
"\n",
@@ -3816,7 +3816,7 @@
" \n",
"Nice!\n",
"\n",
"See how just like our EffNetB2 model for FoodVision Mini the base layers are frozen (these are pretrained on ImageNet) and the outer layers (the `classifier` layers) are trainble with an output shape of `[batch_size, 101]` (`101` for 101 classes in Food101). \n",
"See how just like our EffNetB2 model for FoodVision Mini the base layers are frozen (these are pretrained on ImageNet) and the outer layers (the `classifier` layers) are trainable with an output shape of `[batch_size, 101]` (`101` for 101 classes in Food101). \n",
"\n",
"Now since we're going to be dealing with a fair bit more data than usual, how about we add a little data augmentation to our transforms (`effnetb2_transforms`) to augment the training data.\n",
"\n",
@@ -4371,7 +4371,7 @@
"\n",
"Our FoodVision Big model is capable of classifying 101 classes versus FoodVision Mini's 3 classes, a 33.6x increase!\n",
"\n",
"How does this effect the model size?\n",
"How does this affect the model size?\n",
"\n",
"Let's find out."
]
@@ -4448,7 +4448,7 @@
"* `app.py` contains our FoodVision Big Gradio app.\n",
"* `class_names.txt` contains all of the class names for FoodVision Big.\n",
"* `examples/` contains example images to use with our Gradio app.\n",
"* `model.py` contains the model defintion as well as any transforms associated with the model.\n",
"* `model.py` contains the model definition as well as any transforms associated with the model.\n",
"* `requirements.txt` contains the dependencies to run our app such as `torch`, `torchvision` and `gradio`."
]
},
@@ -4521,7 +4521,7 @@
"source": [
"### 11.2 Saving Food101 class names to file (`class_names.txt`)\n",
"\n",
"Because there are so many classes in the Food101 dataset, instead of storing them as a list in our `app.py` file, let's saved them to a `.txt` file and read them in when necessary instead.\n",
"Because there are so many classes in the Food101 dataset, instead of storing them as a list in our `app.py` file, let's save them to a `.txt` file and read them in when necessary instead.\n",
"\n",
"We'll just remind ourselves what they look like first by checking out `food101_class_names`."
]
@@ -4708,7 +4708,7 @@
"1. **Imports and class names setup** - The `class_names` variable will be a list for all of the Food101 classes rather than pizza, steak, sushi. We can access these via `demos/foodvision_big/class_names.txt`.\n",
"2. **Model and transforms preparation** - The `model` will have `num_classes=101` rather than `num_classes=3`. We'll also be sure to load the weights from `\"09_pretrained_effnetb2_feature_extractor_food101_20_percent.pth\"` (our FoodVision Big model path).\n",
"3. **Predict function** - This will stay the same as FoodVision Mini's `app.py`.\n",
"4. **Gradio app** - The Gradio interace will have different `title`, `description` and `article` parameters to reflect the details of FoodVision Big.\n",
"4. **Gradio app** - The Gradio interface will have different `title`, `description` and `article` parameters to reflect the details of FoodVision Big.\n",
"\n",
"We'll also make sure to save it to `demos/foodvision_big/app.py` using the `%%writefile` magic command."
]
@@ -4962,7 +4962,7 @@
}
],
"source": [
"# IPython is a library to help work with Python iteractively \n",
"# IPython is a library to help work with Python interactively\n",
"from IPython.display import IFrame\n",
"\n",
"# Embed FoodVision Big Gradio demo as an iFrame\n",
@@ -5024,7 +5024,7 @@
" * What model architecture does it use?\n",
"6. Write down 1-3 potential failure points of our deployed FoodVision models and what some potential solutions might be.\n",
" * For example, what happens if someone was to upload a photo that wasn't of food to our FoodVision Mini model?\n",
"7. Pick any dataset from [`torchvision.datasets`](https://pytorch.org/vision/stable/datasets.html) and train a feature extractor model on it using a model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html) (you could use one of the model's we've already created, e.g. EffNetB2 or ViT) for 5 epochs and then deploy your model as a Gradio app to Hugging Face Spaces. \n",
"7. Pick any dataset from [`torchvision.datasets`](https://pytorch.org/vision/stable/datasets.html) and train a feature extractor model on it using a model from [`torchvision.models`](https://pytorch.org/vision/stable/models.html) (you could use one of the models we've already created, e.g. EffNetB2 or ViT) for 5 epochs and then deploy your model as a Gradio app to Hugging Face Spaces. \n",
" * You may want to pick smaller dataset/make a smaller split of it so training doesn't take too long.\n",
" * I'd love to see your deployed models! So be sure to share them in Discord or on the [course GitHub Discussions page](https://github.com/mrdbourke/pytorch-deep-learning/discussions)."
]
@@ -5043,7 +5043,7 @@
" * The [Gradio Blocks API](https://gradio.app/docs/#blocks) for more advanced workflows.\n",
" * The Hugging Face Course chapter on [how to use Gradio with Hugging Face](https://huggingface.co/course/chapter9/1).\n",
"* Edge devices aren't limited to mobile phones, they include small computers like the Raspberry Pi and the PyTorch team have a [fantastic blog post tutorial](https://pytorch.org/tutorials/intermediate/realtime_rpi.html) on deploying a PyTorch model to one.\n",
"* For a fanstastic guide on developing AI and ML-powered applications, see [Google's People + AI Guidebook](https://pair.withgoogle.com/guidebook). One of my favourites is the section on [setting the right expectations](https://pair.withgoogle.com/guidebook/patterns#set-the-right-expectations).\n",
"* For a fantastic guide on developing AI and ML-powered applications, see [Google's People + AI Guidebook](https://pair.withgoogle.com/guidebook). One of my favourites is the section on [setting the right expectations](https://pair.withgoogle.com/guidebook/patterns#set-the-right-expectations).\n",
" * I covered more of these kinds of resources, including guides from Apple, Microsoft and more in the [April 2021 edition of Machine Learning Monthly](https://zerotomastery.io/blog/machine-learning-monthly-april-2021/) (a monthly newsletter I send out with the latest and greatest of the ML field).\n",
"* If you'd like to speed up your model's runtime on CPU, you should be aware of [TorchScript](https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html), [ONNX](https://pytorch.org/docs/stable/onnx.html) (Open Neural Network Exchange) and [OpenVINO](https://docs.openvino.ai/latest/notebooks/102-pytorch-onnx-to-openvino-with-output.html). Going from pure PyTorch to ONNX/OpenVINO models I've seen a ~2x+ increase in performance.\n",
"* For turning models into a deployable and scalable API, see the [TorchServe library](https://pytorch.org/serve/).\n",
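The "Saving Food101 class names to file" hunk above describes writing the class names to a `.txt` file and reading them back in `app.py`. A minimal sketch of that pattern, using a hypothetical three-class list and a temporary directory in place of the real `demos/foodvision_big/class_names.txt`:

```python
import tempfile
from pathlib import Path

# Stand-in for the 101 Food101 class names (hypothetical short list)
class_names = ["pizza", "steak", "sushi"]

# Write one class name per line, as the notebook's class_names.txt does
tmp_dir = Path(tempfile.mkdtemp())
class_names_path = tmp_dir / "class_names.txt"
class_names_path.write_text("\n".join(class_names))

# In app.py, read the names back in when necessary
loaded = [line.strip() for line in class_names_path.read_text().splitlines() if line.strip()]
print(loaded)  # → ['pizza', 'steak', 'sushi']
```

Reading from a file rather than hard-coding the list keeps `app.py` short and lets the same app code work whether there are 3 classes or 101.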
