The behavior of the loss-function plot when regularization is used

Hello
In one of the week 1 videos, it is said that to check our implementation of regularization, we can pay attention to the shape of the loss curve. Specifically, it was mentioned that if the loss decreases monotonically, it means we have implemented regularization correctly. What’s the reason for this behavior?

Hey @Erfan_Brv,
Welcome to the community. As you may already know, the purpose of regularization is to prevent overfitting. Now, when a model overfits the training data, the cost on the training set keeps decreasing monotonically, but after a certain point, the cost on the cross-validation set starts to increase.
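
To make this concrete, here is a minimal sketch of that pattern on synthetic data. Note that it sweeps polynomial degree as a stand-in for an increasing extent of overfitting, and all names and numbers are illustrative, not from the course:

```python
import numpy as np

# Synthetic 1-D regression problem, purely for illustration.
rng = np.random.default_rng(1)
x_train = rng.uniform(-1, 1, 15)
y_train = np.sin(3 * x_train) + 0.2 * rng.normal(size=15)
x_cv = rng.uniform(-1, 1, 15)
y_cv = np.sin(3 * x_cv) + 0.2 * rng.normal(size=15)

# Fit unregularized polynomials of growing degree and compare costs.
for degree in range(1, 11):
    coeffs = np.polyfit(x_train, y_train, degree)
    j_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2) / 2
    j_cv = np.mean((np.polyval(coeffs, x_cv) - y_cv) ** 2) / 2
    print(f"degree {degree:2d}: J_train = {j_train:.4f}   J_cv = {j_cv:.4f}")

# Typical pattern: J_train keeps shrinking as the degree grows, while
# J_cv bottoms out and then rises once the fit starts chasing noise.
```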

So, after implementing regularization, it’s natural to expect the extent of overfitting to decrease, i.e., we would expect the cost to decrease monotonically even on the cross-validation set. If that doesn’t happen, it could be an indicator that either we have implemented the regularization incorrectly (perhaps used the wrong definition of the cost function for gradient descent), or we need to adjust the extent of regularization in order to reduce overfitting.
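
For reference, here is a minimal, self-contained sketch of how such a check might look for the training-set cost, using regularized linear regression with an L2 penalty on synthetic data; the function names are my own, just for illustration:

```python
import numpy as np

def regularized_cost(X, y, w, b, lambda_):
    """Squared-error cost plus an L2 penalty on w (b is not regularized)."""
    m = X.shape[0]
    err = X @ w + b - y
    return (err @ err) / (2 * m) + (lambda_ / (2 * m)) * (w @ w)

def gradient_step(X, y, w, b, lambda_, alpha):
    """One gradient-descent update; note the regularization term in dj_dw."""
    m = X.shape[0]
    err = X @ w + b - y
    dj_dw = (X.T @ err) / m + (lambda_ / m) * w
    dj_db = err.mean()
    return w - alpha * dj_dw, b - alpha * dj_db

# Synthetic data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w, b, lambda_, alpha = np.zeros(3), 0.0, 1.0, 0.1
history = []
for _ in range(200):
    history.append(regularized_cost(X, y, w, b, lambda_))
    w, b = gradient_step(X, y, w, b, lambda_, alpha)

# The sanity check: the recorded cost should never increase between steps.
print("monotonically decreasing:", bool(np.all(np.diff(history) <= 1e-12)))
```

If a check like this fails, the usual culprits are a learning rate that is too large, or a gradient that is missing the regularization term even though the cost includes it (or vice versa).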

I hope this helps.

Regards,
Elemento

Thank you so much, Elemento