Hi, I was working on lab 1 of this week and noticed that the loss on the validation set consistently increases as I run through more epochs. I realize this may have been covered in a previous course, but why does this happen specifically to the validation set? Why is its behavior different from that of the training set?
If the validation loss keeps increasing while the training loss decreases, your model is not generalizing well: it is fitting the training set but not the validation set (i.e. overfitting), or your dataset splits do not come from the same distribution.
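On the second point (splits with different distributions): one common cause is an unstratified split, where class proportions differ between train and validation. A minimal, framework-agnostic sketch of a stratified split (the function name and parameters here are illustrative, not from the lab):

```python
import random
from collections import defaultdict

def stratified_split(labels, val_frac=0.2, seed=0):
    """Split sample indices so each class appears in train and
    validation at (approximately) the same rate."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train_idx, val_idx = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = int(len(idxs) * val_frac)  # val_frac of each class goes to validation
        val_idx += idxs[:cut]
        train_idx += idxs[cut:]
    return train_idx, val_idx

# 80 samples of class 0, 20 of class 1 -> validation keeps the 80/20 ratio
labels = [0] * 80 + [1] * 20
train_idx, val_idx = stratified_split(labels)
print(len(val_idx))  # → 20
```

If you're using scikit-learn, `train_test_split(..., stratify=y)` does the same thing.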
Hi, @crystal_shin!
Just a small addition to @gent.spah's answer: are you getting better metrics on the validation set despite the higher loss? If the metrics are worse too, that's a clear case of overfitting. You can try adding some regularization (e.g. dropout) or data augmentation.
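To make the diagnosis concrete: if you log the per-epoch losses, you can spot the overfitting pattern programmatically. A minimal sketch (the function name and the example loss values are made up for illustration):

```python
def diagnose_overfitting(train_losses, val_losses, patience=3):
    """Flag likely overfitting: training loss is still falling while
    validation loss has not improved for `patience` epochs."""
    best_val = min(val_losses)
    best_epoch = val_losses.index(best_val)
    epochs_since_best = len(val_losses) - 1 - best_epoch
    train_still_improving = train_losses[-1] < train_losses[best_epoch]
    return epochs_since_best >= patience and train_still_improving

# Typical overfitting curves: train loss keeps falling,
# val loss bottoms out at epoch 2 and then rises
train = [1.0, 0.7, 0.5, 0.35, 0.25, 0.18]
val   = [1.1, 0.9, 0.8, 0.85, 0.95, 1.05]
print(diagnose_overfitting(train, val))  # → True
```

This is exactly what early-stopping callbacks (e.g. Keras's `EarlyStopping` monitoring `val_loss`) automate: stop training near the epoch where validation loss was lowest.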