diff --git a/04_pytorch_custom_datasets.ipynb b/04_pytorch_custom_datasets.ipynb
index 1996eb45..8c85a585 100644
--- a/04_pytorch_custom_datasets.ipynb
+++ b/04_pytorch_custom_datasets.ipynb
@@ -2090,7 +2090,7 @@
     "        self.classifier = nn.Sequential(\n",
     "            nn.Flatten(),\n",
     "            # Where did this in_features shape come from? \n",
-    "            # It's because each layer of our network compresses and changes the shape of our inputs data.\n",
+    "            # It's because each layer of our network compresses and changes the shape of our input data.\n",
     "            nn.Linear(in_features=hidden_units*16*16,\n",
     "                      out_features=output_shape)\n",
     "        )\n",
@@ -2361,7 +2361,7 @@
     "        # 5. Optimizer step\n",
     "        optimizer.step()\n",
     "\n",
-    "        # Calculate and accumulate accuracy metric across all batches\n",
+    "        # Calculate and accumulate accuracy metrics across all batches\n",
     "        y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)\n",
     "        train_acc += (y_pred_class == y).sum().item()/len(y_pred)\n",
     "\n",
@@ -2522,7 +2522,7 @@
     "\n",
     "To keep our experiments quick, we'll train our model for **5 epochs** (though you could increase this if you want).\n",
     "\n",
-    "As for an **optimizer** and **loss function**, we'll use `torch.nn.CrossEntropyLoss()` (since we're working with multi-class classification data) and `torch.optim.Adam()` with a learning rate of `1e-3` respecitvely.\n",
+    "As for an **optimizer** and **loss function**, we'll use `torch.nn.CrossEntropyLoss()` (since we're working with multi-class classification data) and `torch.optim.Adam()` with a learning rate of `1e-3` respectively.\n",
     "\n",
     "To see how long things take, we'll import Python's [`timeit.default_timer()`](https://docs.python.org/3/library/timeit.html#timeit.default_timer) method to calculate the training time."
    ]
   },
@@ -2772,7 +2772,7 @@
    "source": [
     "### 8.1 How to deal with overfitting\n",
     "\n",
-    "Since the main problem with overfitting is that you're model is fitting the training data *too well*, you'll want to use techniques to \"reign it in\".\n",
+    "Since the main problem with overfitting is that your model is fitting the training data *too well*, you'll want to use techniques to \"rein it in\".\n",
     "\n",
     "A common technique of preventing overfitting is known as [**regularization**](https://ml-cheatsheet.readthedocs.io/en/latest/regularization.html).\n",
     "\n",
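
For context, the hunks above touch the notebook's training setup (`torch.nn.CrossEntropyLoss()` plus `torch.optim.Adam()` at `lr=1e-3`, timed with `timeit.default_timer()`) and the per-batch accuracy accumulation in the training loop. Below is a minimal, self-contained sketch of that setup; the tiny `nn.Sequential` model and the random-tensor `DataLoader` are stand-ins invented here so the snippet runs on its own, not the notebook's real TinyVGG model or image dataloaders.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from timeit import default_timer as timer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-ins so the sketch runs end to end -- the notebook's real TinyVGG
# model and image DataLoaders are defined elsewhere in the file.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 3)).to(device)
dataset = TensorDataset(torch.randn(32, 3, 64, 64),   # fake images
                        torch.randint(0, 3, (32,)))   # fake labels
train_dataloader = DataLoader(dataset, batch_size=8)

# Loss and optimizer as described in the hunk at notebook line 2522:
# CrossEntropyLoss for multi-class classification, Adam with lr=1e-3.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-3)

start_time = timer()
for epoch in range(5):  # 5 epochs to keep experiments quick
    train_acc = 0
    for X, y in train_dataloader:
        X, y = X.to(device), y.to(device)
        y_pred = model(X)
        loss = loss_fn(y_pred, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Calculate and accumulate accuracy metrics across all batches,
        # as in the training-loop hunk above.
        y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
        train_acc += (y_pred_class == y).sum().item() / len(y_pred)
    train_acc /= len(train_dataloader)
end_time = timer()
print(f"Total training time: {end_time - start_time:.3f} seconds")
```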