I believe Quiz 1, Question 3 is misleading.
Here is my argument: we usually call `scheduler.step()` at the end of each epoch, after that epoch's training loop has finished (as in the Lab “Scheduler in PyTorch” notebook). That means StepLR reduces the learning rate after every 10 epochs. In other words:
- Epochs 0–9 → 0.01
- Epochs 10–19 → 0.001
- Epochs 20–29 → 0.0001
Hence, at epoch 20 (index 19), the LR is 0.001.
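This mapping can be checked with a minimal sketch (the model and optimizer below are placeholders; only the StepLR settings matter):

```python
import torch

# Minimal sketch: record the LR that StepLR(step_size=10, gamma=0.1)
# exposes at the start of each epoch, stepping once per epoch at the end.
model = torch.nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

lrs = []
for epoch in range(30):
    lrs.append(optimizer.param_groups[0]["lr"])  # LR used during this epoch
    # ... training for this epoch would happen here ...
    scheduler.step()  # called once, after the epoch's training
```

Inspecting `lrs` confirms the table above: indices 0–9 hold 0.01, indices 10–19 hold 0.001, and indices 20–29 hold 0.0001, so the 20th epoch (index 19) trains with 0.001.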
In addition, the same ambiguity (or mistake) appears in the Lab 1 notebook:
Note that the code steps the scheduler and then logs the learning rate after training for that epoch. As a result, the line

```python
pbar.maybe_log_epoch(epoch=epoch+1, message=f"At epoch {epoch+1}: Training loss: {train_loss:.4f}, Training accuracy: {train_acc:.4f}, LR: {current_lr:.6f}")
```

prints the wrong learning rate (the one that was just updated, not the one that was actually used for the epoch).
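A small sketch illustrates the difference at the step boundary; the function and variable names here are illustrative, not the Lab's actual code:

```python
import torch

def logged_lrs(step_before_logging):
    # Run 10 epochs with StepLR(step_size=10, gamma=0.1) and record
    # the LR that would be logged each epoch under the given ordering.
    opt = torch.optim.SGD(torch.nn.Linear(1, 1).parameters(), lr=0.01)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.1)
    log = []
    for epoch in range(10):
        # ... training with the current LR happens here ...
        if step_before_logging:
            sched.step()                      # Lab's current ordering
        log.append(opt.param_groups[0]["lr"])
        if not step_before_logging:
            sched.step()                      # suggested ordering
    return log

buggy = logged_lrs(step_before_logging=True)
fixed = logged_lrs(step_before_logging=False)
```

At the 10th epoch (index 9), the current ordering logs 0.001 even though that epoch was trained with 0.01; logging before stepping records 0.01, the LR actually used.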
So my suggestion would be to fix the Lab code (both the StepLR cell and the train_and_evaluate_with_scheduler function) by moving the scheduler.step() calls to the end of the loop (after the logging), and also to correct Quiz Question 3 by either rephrasing it as “at epoch index 20” or changing the correct choice.
Thanks

