The inline text for this lab states:
"The training should have stopped after less than 10 epochs and it should have reached an accuracy over 99.9% (firing the callback)."
However, this did not happen for me, even though all of my upstream code blocks passed with perfect scores. My model hit 99.3% training accuracy after the 10th epoch (and validation accuracy reached 100.0% with zero loss after the 4th epoch), so the callback wasn't triggering.
The training accuracy bounced between 98% and 99.8% from the 10th epoch onward until it finally (randomly?) hit 100% on the 42nd epoch, far later than the 10 epochs the inline text indicates. Is this expected behavior, or does the discrepancy point to a hidden bug in my code?
Update: I completed the lab and received a score of 100/100, so I am inclined to believe there is an error in the lab text or code. Either the text should indicate a greater number of epochs, or the callback should trigger on validation accuracy rather than training accuracy.
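To illustrate the distinction, here is a minimal sketch of the kind of threshold callback such labs typically use. The class name, threshold, and metric keys are assumptions, not the lab's actual code; in a real notebook the class would subclass `tf.keras.callbacks.Callback`, but the base class is omitted here so the logic runs without TensorFlow installed. The key point is the `monitor` argument: watching `accuracy` (training) behaves very differently from watching `val_accuracy` (validation), which is exactly the discrepancy described above.

```python
# Hypothetical sketch of a threshold-based early-stopping callback.
# In Keras this would subclass tf.keras.callbacks.Callback; Keras
# assigns self.model before training and passes epoch metrics in `logs`.

class AccuracyThresholdCallback:
    def __init__(self, monitor="accuracy", threshold=0.999):
        self.monitor = monitor      # which logged metric to watch
        self.threshold = threshold  # stop once the metric reaches this
        self.model = None           # Keras sets this automatically

    def on_epoch_end(self, epoch, logs=None):
        value = (logs or {}).get(self.monitor)
        if value is not None and value >= self.threshold:
            # Keras checks this flag and halts the fit loop.
            self.model.stop_training = True


# Demo with fake epoch metrics matching the numbers reported above:
# training accuracy 99.3%, validation accuracy 100.0%.
class FakeModel:
    stop_training = False

logs = {"accuracy": 0.993, "val_accuracy": 1.0}

train_cb = AccuracyThresholdCallback(monitor="accuracy")
train_cb.model = FakeModel()
train_cb.on_epoch_end(10, logs)
print(train_cb.model.stop_training)  # False: 99.3% < 99.9%

val_cb = AccuracyThresholdCallback(monitor="val_accuracy")
val_cb.model = FakeModel()
val_cb.on_epoch_end(4, logs)
print(val_cb.model.stop_training)    # True: 100.0% >= 99.9%
```

With these numbers, a callback monitoring training accuracy keeps running past epoch 10, while one monitoring validation accuracy would have stopped at epoch 4, which matches the behavior observed in the lab.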