I ran the Course 2 Week 2 assignment 10 times with different parameters. As Lorence said, I kept tweaking either the data augmentation arguments or the layers of the convnets.
One thing I can't understand is that the validation accuracy is usually higher than the training accuracy in my model, which seems counter-intuitive. Can you give me an explanation?
What's the split, sir? If there are fewer samples in the validation split, the accuracy there can be higher and the loss lower.
Can you check the split and let me know?
Thank you
I remember it is 9:1 (train vs. val).
My train_val_generators returns:
Found 22498 images belonging to 2 classes.
Found 2500 images belonging to 2 classes.
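For reference, the two counts above do correspond to roughly the 9:1 split mentioned earlier; a quick check in plain Python (using only the numbers printed by the generators) confirms it:

```python
# Counts printed by train_val_generators (flow_from_directory output)
train_images = 22498
val_images = 2500

total = train_images + val_images
val_fraction = val_images / total

print(f"validation fraction: {val_fraction:.4f}")  # about 0.10, i.e. a 9:1 split
```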
That might be the reason, sir: there are very few images to validate on, so the class distribution in the validation set is less representative.
Try increasing the validation split and see if the graphs change.
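A minimal sketch of what changing the split means, assuming you split a shuffled file list by a fraction (in the course notebook itself the split is done when the directories are created, or via the `validation_split` argument of `ImageDataGenerator`; the helper and filenames below are hypothetical, for illustration only):

```python
import random

def split_dataset(filenames, val_fraction, seed=42):
    """Hypothetical helper: shuffle the file list and split it into
    (train, val) subsets according to val_fraction."""
    files = list(filenames)
    random.Random(seed).shuffle(files)  # fixed seed for a reproducible split
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]

# Illustrative file list (names are made up)
all_files = [f"cat_or_dog_{i}.jpg" for i in range(24998)]

# A larger validation fraction gives more images to validate on,
# so the validation metrics become less noisy.
for frac in (0.1, 0.2, 0.3):
    train, val = split_dataset(all_files, frac)
    print(f"val_fraction={frac}: {len(train)} train / {len(val)} val")
```

The trade-off is that every image moved into the validation set is one fewer image to train on, so the split is usually kept in the 10-30% range.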
An inappropriate proportion between the training and validation sets can result in such a graph.
Thank you. I didn’t know that.
You can try changing the split and looking at the graphs.
They should change, as far as I know.
You are most welcome.