My model keeps producing a situation in which my validation accuracy and loss are performing better than the training accuracy and loss, respectively. (In the last epoch the val_loss went up, but it was still better than the training loss overall.)
I’m taking this to mean that the model generalizes well, but should I be concerned? I augmented the training data a little and set the model’s dropout to 0.4, so it’s possible the training data is simply harder than the validation set, but I’m not sure whether this could also indicate a bug in my code or something.
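For reference, part of my reasoning is that dropout is only active during training, so the training loss is computed on a handicapped network while validation runs on the full one. Here's a minimal pure-Python sketch of inverted dropout (illustrative only, not my actual model code) showing that difference:

```python
import random

def dropout(x, p, training):
    """Inverted dropout: during training, zero each unit with
    probability p and scale survivors by 1/(1-p) so the expected
    activation is unchanged; at eval time, pass input through as-is."""
    if not training:
        return x
    return [0.0 if random.random() < p else v / (1 - p) for v in x]

random.seed(0)
acts = [1.0] * 10

# Training pass: noisy, some activations zeroed, rest scaled up.
train_out = dropout(acts, p=0.4, training=True)

# Validation pass: deterministic, identical to the input.
eval_out = dropout(acts, p=0.4, training=False)
```

So with p=0.4 the training-time forward pass is noticeably noisier than the validation one, which (as I understand it) can by itself make val_loss come out lower than train_loss.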