Course 4 Week 2 Project 2: Why Is Training Loss Higher than Validation Loss?

In the last exercise in Transfer_learning_with_MobileNet_v1, after fine-tuning, I got the accuracy and loss charts shown below. My question is: why is the training loss higher than the validation loss? Does it mean the model is underfitting the data, or is something wrong in the configuration?

Maybe it's underfitting the data, or, more probably, the distributions of the training and validation data are not very similar.

Maybe the validation data is simpler than the training data set, so the model needs more time to learn the training data than to do well on validation.

Hi @yaseru2003, it can also be that the hyperparameters are not well suited to the model.

My understanding is that the model did well even though the training was not optimal.

Since the difference is not that big, I would expect that better tuning of the hyperparameters should get rid of it.

But this is just one possibility.
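One more possibility worth checking: regularization such as dropout is active when the training loss is computed, but disabled at validation time, so the reported training loss can read higher even when the model fits both sets equally well. Here is a minimal NumPy sketch of that effect (a hypothetical toy linear model, not the course's MobileNet code; `p_keep` and the data are made up for illustration):

```python
import numpy as np

# Toy demonstration: dropout noise is applied when the "training" loss is
# measured, but turned off for the "validation" loss, so training loss can
# come out higher even on identical data with an already well-fit model.
rng = np.random.default_rng(0)

n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true                      # noiseless targets for simplicity

w = w_true.copy()                   # pretend the model is already well fit
p_keep = 0.8                        # hypothetical dropout keep probability

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

# "Validation" loss: dropout off (inference mode)
val_loss = mse(X @ w, y)

# "Training" loss: inverted dropout applied to the inputs
mask = rng.binomial(1, p_keep, size=X.shape) / p_keep
train_loss = mse((X * mask) @ w, y)

print(f"train loss (dropout on):  {train_loss:.4f}")
print(f"val loss   (dropout off): {val_loss:.4f}")
```

In a framework like Keras the same check would be to run `model.evaluate` on the training set (which runs in inference mode) and compare that number, rather than the per-epoch training loss, against the validation loss.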