Hi, I was working on Lab 1 this week and noticed that the validation loss consistently increases as I run through more epochs. I realize this may have been covered in the previous course, but why does this happen specifically to the validation set? Why is its behavior different from that of the training set?
If the validation loss keeps increasing while the training loss keeps decreasing, your model is overfitting: it is memorizing the training set rather than learning patterns that generalize to unseen data. That's why the two curves diverge. It can also happen if your dataset splits don't have the same distribution, so the validation set isn't representative of what the model was trained on.
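The standard remedy is to monitor the validation loss each epoch and stop once it stops improving ("early stopping"). Here's a minimal, framework-agnostic sketch; the `EarlyStopping` class name, `patience` parameter, and the simulated loss values are all illustrative, not part of the lab:

```python
class EarlyStopping:
    """Signal a stop once validation loss fails to improve for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # how many bad epochs to tolerate
        self.min_delta = min_delta    # minimum improvement that counts
        self.best = float("inf")      # best validation loss seen so far
        self.bad_epochs = 0           # consecutive epochs without improvement

    def step(self, val_loss):
        # Returns True when training should stop.
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Simulated per-epoch validation losses: improves at first, then overfits.
val_losses = [0.90, 0.70, 0.60, 0.58, 0.61, 0.65, 0.70, 0.76]
stopper = EarlyStopping(patience=3)
for epoch, vl in enumerate(val_losses):
    if stopper.step(vl):
        print(f"stopping at epoch {epoch}, best val loss {stopper.best:.2f}")
        break
```

Most frameworks ship this as a built-in callback, but the logic is exactly this: keep the weights from the epoch with the lowest validation loss and stop once it has been rising for a few epochs.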