Hi,
I am following the C1_W4_Lab_1 notebook exactly, but I am getting weird errors while fitting the model.
Batch size, steps_per_epoch and epochs are all exactly as suggested.
The only difference is that I am using JupyterLab instead of Google Colab.
Although it does not stop the training, the accuracy and loss are severely impacted: I get an accuracy of ~0.50 and a loss >0.50. Running the same code in Google Colab, it works perfectly, with an accuracy >0.95 and a loss <0.10. Very strange! I updated the `PyDataset` module too.
From Stack Overflow, I tried the following, with no success:
Update the `PyDataset` module - done - no success
Use `math.ceil(total_imgs / batch_size)` to calculate steps_per_epoch - done - no success
Run with fewer steps - done - works, but with much lower accuracy and higher loss.
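For reference, this is how I am computing steps_per_epoch. Note that integer division with `//` already floors, so `math.ceil` has to be applied to a true division (`/`) or the final partial batch is silently dropped. `total_imgs` and the batch size of 32 here are just placeholder values for illustration:

```python
import math

def steps_per_epoch(total_imgs: int, batch_size: int) -> int:
    # Ceiling of a *true* division: math.ceil(total_imgs // batch_size)
    # would floor first, dropping the last partial batch.
    return math.ceil(total_imgs / batch_size)

# 1027 images at batch size 32 -> 32 full batches + 1 partial = 33 steps
print(steps_per_epoch(1027, 32))
```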
One strange thing I have observed: if I count the actual images in the training dataset (Horses and Humans combined), I get 1029, while ImageDataGenerator reports 1027. See screenshots below.
Is that mismatch messing up the steps_per_epoch calculation?
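One possible explanation for the 1029 vs. 1027 gap: `flow_from_directory` only counts files whose extensions are on its image whitelist, whereas a raw file count also picks up stray entries such as `.DS_Store` files or `.ipynb_checkpoints` folders that JupyterLab creates. Here is a minimal sketch that compares the two counts; the extension set is an assumption based on common Keras defaults, and the directory layout is fabricated for the demo:

```python
import os
import tempfile

# Assumed whitelist, mirroring typical Keras flow_from_directory behavior.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".bmp", ".gif"}

def count_files(root):
    """Return (raw_count, image_count) under root."""
    raw = images = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            raw += 1
            ext = os.path.splitext(name)[1].lower()
            # Skip non-image files and anything inside hidden folders
            # such as .ipynb_checkpoints.
            if ext in IMAGE_EXTS and ".ipynb_checkpoints" not in dirpath:
                images += 1
    return raw, images

# Fabricated example: 3 real images plus one stray .DS_Store file.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "horses"))
for i in range(3):
    open(os.path.join(root, "horses", f"h{i}.png"), "w").close()
open(os.path.join(root, "horses", ".DS_Store"), "w").close()

raw, imgs = count_files(root)
print(raw, imgs)  # raw count is higher than the image count
```

If the raw count and the image count differ on your machine, the generator's 1027 is the number actually fed to training, so that is the number to use when computing steps_per_epoch.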
It is normal for your local results and Colab results to differ slightly. But if the results are hugely different, it may be due to package version differences between your local environment and the Colab environment. Try to make sure both environments use the same versions (TensorFlow version, etc.).
Another thing to consider: make sure you don't run the training cells twice without rebuilding the model. Continuing to fit an already-trained model could affect the results.
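To compare the two environments, a quick sketch like this (run in both JupyterLab and Colab) prints the interpreter and package versions side by side; the package list here is an assumption, so add whatever the notebook actually imports:

```python
import sys
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages=("tensorflow", "keras", "numpy")):
    """Collect Python and package version strings for comparison."""
    lines = [f"python {sys.version.split()[0]}"]
    for pkg in packages:
        try:
            lines.append(f"{pkg} {version(pkg)}")
        except PackageNotFoundError:
            # Package missing in this environment -- itself a useful signal.
            lines.append(f"{pkg} not installed")
    return lines

for line in report_versions():
    print(line)
```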