I completed my final assignment for Introduction to TensorFlow but received an incorrect grade.
The assignment requires reaching 99.9% accuracy before epoch 15, and I reached 100% accuracy at epoch 14. The autograder, however, says I reached epoch 15 with an accuracy of 89.99%.
I resubmitted the assignment without any changes, and now the autograder says I reached an accuracy of 91.25% by epoch 15.
There’s no reproducibility setup in the notebook. The staff have been informed about this and asked to rely on the training history instead of re-running the model in the grader environment.
Even with enable_op_determinism, if the grader and lab environments differ, we can observe different outcomes. Here’s the relevant text from the TensorFlow docs:
This means that if an op is run multiple times with the same inputs on the same hardware, it will have the exact same outputs each time.
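For what it’s worth, here is a minimal sketch of the reproducibility setup the notebook lacks (the seed value 42 is arbitrary). Note the caveat above still applies: this only guarantees identical results on the same hardware and software stack.

```python
import tensorflow as tf

# Seed the Python, NumPy and TensorFlow RNGs in one call
tf.keras.utils.set_random_seed(42)

# Make TF ops deterministic: repeated runs on the SAME
# hardware/software stack then produce identical outputs
tf.config.experimental.enable_op_determinism()
```

These two calls would go in the first cell of the notebook, before any model is built or trained.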
One way to approach such problems is to reach the desired threshold in fewer than 15 epochs (say, 12) and keep an eye on the metrics. You don’t want model performance jumping around from run to run; you want enough margin that
Reached 99.9% accuracy so cancelling training! is printed out before reaching 15 epochs.
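That message comes from a custom callback. A minimal sketch of the kind this course uses (the class name `StopAtThreshold` and the default threshold are my own; adjust them to match your notebook):

```python
import tensorflow as tf

class StopAtThreshold(tf.keras.callbacks.Callback):
    """Stop training once training accuracy reaches the given threshold."""

    def __init__(self, threshold=0.999):
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        accuracy = (logs or {}).get("accuracy", 0.0)
        if accuracy >= self.threshold:
            print(f"\nReached {self.threshold:.1%} accuracy so cancelling training!")
            # Tell Keras to stop at the end of this epoch
            self.model.stop_training = True
```

You pass an instance of it via `model.fit(..., callbacks=[StopAtThreshold()])`, and training halts as soon as an epoch ends at or above the threshold.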
Things I would check if I had an issue like yours: my batch size, since that affects how training progresses per epoch, and the number of units used in the model.
My model is as simple as it can be, especially the convolution layers and the dense-layer units, including the final dense layer.
If you have used larger values, I would tweak the units down and retrain!
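As an illustration, here is the sort of deliberately small model I mean, assuming the usual MNIST-style 28x28 grayscale input with 10 classes (the exact filter and unit counts are just examples, not the assignment’s required values):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    # One small convolution block is often enough at this level
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    # Modest dense width; going much higher rarely helps here
    tf.keras.layers.Dense(128, activation="relu"),
    # Final layer: one unit per class
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

If your dense layer is much wider than this, or you stacked several large conv blocks, that is where I would trim first.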
Another issue: check which callback you pass in the statement that trains your model. The callback has already been instantiated from its class in one of the previous cells, so reuse that instance instead of creating a new one.