I am not setting training=True anywhere. I suspect this may be handled in the training / test / assertion libraries. Also, on the shortcut path I use X_shortcut, so I have… (not all the code is displayed). Does this look wrong?
That is the issue: set training=True when you apply BatchNormalization in every component, as well as on the shortcut path. The instructions below the exercise header tell you to use training=True indirectly, when they mention that if you set training to False the weights will not get updated.
Also make sure you have used X_shortcut in the Conv2D layer on the shortcut path, as in the sketch below.
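Roughly like this for the shortcut path, just as a sketch; the filter count, kernel size, and stride below are placeholders, not the assignment's values:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Add, Activation

def shortcut_branch(X, X_shortcut, F3=256, s=2):
    """Sketch of the shortcut path of a convolutional block (placeholder values)."""
    # The Conv2D is applied to X_shortcut (not X), as stressed above.
    X_shortcut = Conv2D(filters=F3, kernel_size=1, strides=(s, s), padding='valid')(X_shortcut)
    # training=True on the shortcut's BatchNorm as well as the main-path ones.
    X_shortcut = BatchNormalization(axis=3)(X_shortcut, training=True)
    # Merge the main path with the shortcut and apply the final ReLU.
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X
```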
It’s possible that there are different versions of this course still active, but in the current version that I see we do not set training = True anywhere: it is handled in the test code.
The other important thing about the current version is that they supply their own import for the BatchNormalization function from the local utility file. It is a mistake to use the standard TF version of BatchNormalization.
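Roughly, the distinction looks like this; "resnets_utils" below is only a placeholder for whatever local utility file the current notebook actually imports from:

```python
# What the current notebook does (placeholder module name, shown commented out):
# from resnets_utils import BatchNormalization

# The mistake described above: using the standard TF layer instead.
# from tensorflow.keras.layers import BatchNormalization
```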
But if you have gotten full credit from the grader, then your code is consistent with the version you are using, so maybe this is a red herring.
The version I had mentioned otherwise on both points, but as I am not a mentor for this course, I don't have access to its latest version. I am wondering, though, how @Cormac_Garvey's issue could still be related to the same thing if the labs were updated. Probably l.t. needs to be notified of this, to avoid confusion.
From what I remember, the training part was removed from the Sequence Models course assignment after the course update, but I don't remember for CNN.
Thanks for the update, though.
@Mubsi, can you check if the test cell still needs any update based on the current version? As Paul mentions, training doesn't need to be set to True.
Hi, I can confirm that for the 2nd graded exercise I used BatchNormalization(axis=3)(X, training=True) for the 3 components. I used a similar approach on the shortcut path (but with X_shortcut).
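In other words, something like this for one main-path component; the filter count and kernel size below are just placeholders, not the assignment's values:

```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation

def main_path_component(X, filters=64, kernel_size=3):
    """Sketch of one Conv -> BatchNorm -> ReLU component (placeholder values)."""
    X = Conv2D(filters=filters, kernel_size=kernel_size, padding='same')(X)
    # The call pattern described above: channels-last axis and training=True.
    X = BatchNormalization(axis=3)(X, training=True)
    X = Activation('relu')(X)
    return X
```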
To close the loop on the public thread: Cormac is using the current version of the notebook. But setting training = True was apparently added at the same time as whatever other fix actually solved the problem, so training = True is a red herring. The current code will still pass the tests if you remove the setting of training = True on the BatchNorm calls.