Validation Accuracy and Loss Better than Training Acc/Loss

My model keeps producing a situation in which the validation accuracy/loss performs better than the training accuracy/loss. (In the last epoch, val_loss went up, but it was still better than the training loss overall.)

I’m taking this to mean that the model should generalize well, but should I be concerned? I augmented the training data a little and set the model’s dropout to 0.4, so it’s possible that the training data is simply harder than the validation set, but I’m not sure whether this could also mean there’s a bug in my code.
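For reference, here’s roughly the kind of setup I mean. This is a simplified sketch, not my actual code: the framework (Keras), the layers, and the input shape are all placeholders. The relevant part is the `Dropout(0.4)` layer, which is only active while the training metrics are being computed.

```python
import tensorflow as tf

# Placeholder architecture; only the dropout rate (0.4) matches the real model.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.4),  # active during training, a no-op at validation time
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```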


Okay, I switched horizontal flipping to false, since none of the dataset images use the opposite hand, and that significantly improved the training-set metrics. The validation metrics have been better the whole time, though. Is that unusual?
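In case it helps anyone, the change was along these lines. I’m showing it with Keras’s `ImageDataGenerator` as an example; if you augment with a different library, the flag name will differ, and the other augmentation values here are just illustrative.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training-time augmentation: illustrative values, except horizontal_flip.
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=False,  # no mirrored hands exist in the dataset
)

# Validation data is left unaugmented, so it is easier than the training data.
val_gen = ImageDataGenerator(rescale=1.0 / 255)
```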


When using data augmentation and dropout, the training accuracy curve is likely to be bumpy.

Keep in mind that dropout is only applied during training: the training metrics are computed with part of the network randomly disabled, while the validation metrics use the full network. And when the validation data is much easier to classify than the (augmented) training data, accuracy on the validation data will likely be higher than on the training data.
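You can check the dropout half of this directly: the same `Dropout` layer zeroes a fraction of activations in training mode and passes inputs through unchanged at evaluation time. A quick TensorFlow demonstration (the 0.4 rate is taken from the post above):

```python
import tensorflow as tf

x = tf.ones((1, 8))
dropout = tf.keras.layers.Dropout(0.4)

# Training mode: ~40% of units are zeroed, survivors are scaled by 1/(1 - 0.4).
print(dropout(x, training=True))

# Evaluation mode: the input passes through unchanged.
print(dropout(x, training=False))
```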
