Using L2 regularization when the overfitting issue is minor

In the Course 2, Week 1 videos, Andrew says, "if there is no overfitting issue, we don't usually bother to use drop-out." Does that imply that we can still use L2 regularization even when there is no overfitting issue, or only a minor one?

I believe Prof. Andrew Ng means that you first train your model without any regularization and compare training vs. validation performance to determine whether you are underfitting or overfitting. If you overfit, you can then add regularization techniques such as L1, L2, dropout, batch normalization, etc.
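For concreteness, here is a minimal Keras sketch of that workflow. The toy data, layer sizes, 0.05 gap threshold, and 0.01 L2 strength are all illustrative placeholders to tune, not values from the course:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Toy stand-in data; in the course you would use the assignment's dataset.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 20)).astype("float32")
y = (x[:, 0] + x[:, 1] > 0).astype("int32")

def build_model(l2_strength=0.0):
    # kernel_regularizer=None means no penalty; regularizers.l2(lam) adds
    # lam * sum(W**2) for that layer's weights to the training loss.
    reg = regularizers.l2(l2_strength) if l2_strength > 0 else None
    model = tf.keras.Sequential([
        layers.Dense(32, activation="relu", kernel_regularizer=reg),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Step 1: train a baseline with no regularization.
baseline = build_model(0.0)
hist = baseline.fit(x, y, validation_split=0.2, epochs=20, verbose=0)

# Step 2: a large train/validation accuracy gap suggests overfitting.
gap = hist.history["accuracy"][-1] - hist.history["val_accuracy"][-1]
print(f"train/val accuracy gap: {gap:.3f}")

# Step 3: only then retrain with an L2 penalty and re-check the gap.
if gap > 0.05:
    regularized = build_model(l2_strength=0.01)
    regularized.fit(x, y, validation_split=0.2, epochs=20, verbose=0)
```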

TL;DR: Occam's razor: don't add regularization unless the train/validation gap shows you need it.
