diff --git a/09_pytorch_model_deployment.ipynb b/09_pytorch_model_deployment.ipynb
index 0402b799..268b8f25 100644
--- a/09_pytorch_model_deployment.ipynb
+++ b/09_pytorch_model_deployment.ipynb
@@ -3637,8 +3637,8 @@
     "4. Creating an environment (`python3 -m venv env`).\n",
     "5. Activating the environment (`source env/bin/activate`).\n",
     "5. Installing the requirements (`pip install -r requirements.txt`, the \"`-r`\" is for recursive).\n",
-    " * **Note:** If you're facing errors, you may need to upgrade `pip` first: `pip install --upgrade pip`\n",
-    "6. Run the app (`python app.py`).\n",
+    " * **Note:** This step may take 5-10 minutes depending on your internet connection. If you're facing errors, you may need to upgrade `pip` first: `pip install --upgrade pip`.\n",
+    "6. Run the app (`python3 app.py`).\n",
     "\n",
     "This should result in a Gradio demo just like the one we built above running locally on your machine at a URL such as `http://127.0.0.1:7860/`.\n",
     "\n",
@@ -5004,16 +5004,16 @@
     "\n",
     "You should be able to complete them by referencing each section or by following the resource(s) linked.\n",
     "\n",
-    "**TK Resources:**\n",
+    "**TODO Resources:**\n",
     "\n",
-    "* [TK Exercise template notebook for 08].\n",
+    "* [Exercise template notebook for 09](https://github.com/mrdbourke/pytorch-deep-learning/blob/main/extras/exercises/09_pytorch_model_deployment_exercises.ipynb).\n",
     "* [TK Example solutions notebook for 08] try the exercises *before* looking at this.\n",
     " * See a live [TK video walkthrough of the solutions on YouTube] (errors and all).\n",
     "\n",
-    "1. Make and time predictions with both models on the test dataset using the GPU (`device=\"cuda\"`). Compare the model's prediction times on GPU vs CPU - does this close the gap between them? As in, does making predictions on the GPU make the ViT feature extractor prediction times closer to the EffNetB2 prediction times?\n",
-    " * You'll find code to do these steps in section 5. Making predictions with our trained models and timing them and section 6. Comparing model results, prediction times and size.\n",
-    "2. The ViT feature extractor seems to have more learning capacity (due to more parameters) than EffNetB2, how does it go on the larger 20% split of the Food101 dataset?\n",
-    " * Train a ViT feature extractor on the 20% Food101 dataset for 5 epochs, just like we did with EffNetB2 in section 10. Creating FoodVision Big.\n",
+    "1. Make and time predictions with both feature extractor models on the test dataset using the GPU (`device=\"cuda\"`). Compare the models' prediction times on GPU vs CPU - does this close the gap between them? As in, does making predictions on the GPU make the ViT feature extractor prediction times closer to the EffNetB2 feature extractor prediction times?\n",
+    " * You'll find code to do these steps in [section 5. Making predictions with our trained models and timing them](https://www.learnpytorch.io/09_pytorch_model_deployment/#5-making-predictions-with-our-trained-models-and-timing-them) and [section 6. Comparing model results, prediction times and size](https://www.learnpytorch.io/09_pytorch_model_deployment/#6-comparing-model-results-prediction-times-and-size).\n",
+    "2. The ViT feature extractor seems to have more learning capacity (due to more parameters) than EffNetB2. How does it go on the larger 20% split of the entire Food101 dataset?\n",
+    " * Train a ViT feature extractor on the 20% Food101 dataset for 5 epochs, just like we did with EffNetB2 in section [10. Creating FoodVision Big](https://www.learnpytorch.io/09_pytorch_model_deployment/#10-creating-foodvision-big).\n",
     "3. Make predictions across the 20% Food101 test dataset with the ViT feature extractor from exercise 2 and find the \"most wrong\" predictions.\n",
     " * The predictions will be the ones with the highest prediction probability but with the wrong predicted label.\n",
     " * Write a sentence or two about why you think the model got these predictions wrong.\n",
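
For exercise 1, the timing itself is simple once you have a prediction function. Below is a minimal, device-agnostic sketch (the `time_predictions` helper and the toy `square` stand-in are hypothetical, not from the course code); with a real model on `device="cuda"` you'd also call `torch.cuda.synchronize()` before reading the timer, because CUDA ops run asynchronously.

```python
from timeit import default_timer as timer

def time_predictions(predict_fn, samples):
    """Run predict_fn over samples, return (predictions, seconds per sample)."""
    samples = list(samples)
    start = timer()
    preds = [predict_fn(s) for s in samples]
    total_time = timer() - start
    return preds, total_time / len(samples)

# Toy stand-in for a model's predict function.
square = lambda x: x * x
preds, secs_per_pred = time_predictions(square, range(100))
print(f"{secs_per_pred:.8f} seconds per prediction")
```

Comparing the per-sample figure for each model on CPU and then on GPU gives the gap the exercise asks about.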
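
Exercise 3's "most wrong" logic is framework-agnostic once the per-sample predictions are collected. The sketch below assumes you've already gathered the predicted label, true label and max prediction probability for each test image (the dict keys and the `most_wrong` helper are hypothetical names, not from the course code):

```python
def most_wrong(samples, top_k=5):
    """samples: dicts with 'pred', 'truth' and 'pred_prob' (max softmax prob)."""
    wrong = [s for s in samples if s["pred"] != s["truth"]]
    # "Most wrong" = the most confident mistakes, so sort by probability, descending.
    return sorted(wrong, key=lambda s: s["pred_prob"], reverse=True)[:top_k]

# Toy predictions (made up for illustration, not real FoodVision outputs).
samples = [
    {"pred": "pizza", "truth": "pizza", "pred_prob": 0.98},
    {"pred": "steak", "truth": "sushi", "pred_prob": 0.91},
    {"pred": "sushi", "truth": "steak", "pred_prob": 0.55},
]
for s in most_wrong(samples):
    print(s)
```

The samples at the top of this list are the ones worth inspecting when writing up why the model got them wrong.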