The models/model.h5 file may be corrupt. When I load those weights, the final cell in the graded part of the assignment produces the following inaccurate results. When I instead use weights obtained by fitting the model developed in the assignment for 10 epochs, the results are much more accurate.
source: 3 May 1979
output: 1970-01-01
source: 5 April 09
output: 2099-00-00
source: 21th of August 2016
output: 2016-08-10
source: Tue 10 Jul 2007
output: 2007-07-01
source: Saturday May 9 2018
output: 2018-09-01
source: March 3 2001
output: 2000-01-00
source: March 3rd 2001
output: 2010-01-01
source: 1 March 2001
output: 2000-01-01
Sorry for resurrecting this old thread, but I also noticed corrupted pre-trained weights. The output when loading the models/model.h5 weights is:
source: 3 May 1979
output: 1111111111
source: 5 April 09
output: 2222222222
source: 21th of August 2016
output: 2222222222
source: Tue 10 Jul 2007
output: 2222222222
source: Saturday May 9 2018
output: 2222222222
source: March 3 2001
output: 2222222222
source: March 3rd 2001
output: 2222222222
source: 1 March 2001
output: 2222222222
The model architecture is correct. In fact, the assignment receives full marks, and if I do not load the pre-trained weights and instead train the model for a few more epochs, I get results that are not perfect, but definitely more sensible.
The “model.h5” file has not been modified since 2021, so I doubt it has been corrupted.
Have you tried restarting the kernel and clearing all the output, then running all the notebook cells again?
The other possibility is that the predictions are incorrect due to a defect in your one_step_attention() or modelf() functions.
If you suspect that your copy of model.h5 is damaged, you can check its age from the “Files” menu, and if it has a recent date, then you’ll have to rename/save your current ipynb file and get a fresh copy of the lab files. Then you can restore your notebook by renaming it back to the default lab name.