Do you see regularization the way I do?

Thanks for your post.

The purpose of regularization is to reduce model complexity by penalising large weights, which in turn reduces overfitting. In other words, it discourages the model from depending too heavily on any individual parameters, e.g. by:

  • driving weights exactly to zero (L1 regularization) or
  • driving weights close to zero (L2 regularization)
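To make the difference concrete, here is a small sketch on a toy regression problem (synthetic data, regularization strength `lam` chosen just for illustration): L2 has a closed-form ridge solution that shrinks all weights, while L1 (solved here with proximal gradient / soft-thresholding) can set some weights exactly to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_w = np.array([2.0, 0.0, 0.0, -1.0, 0.0])  # only 2 features matter
y = X @ true_w + 0.1 * rng.normal(size=50)

lam = 5.0  # regularization strength (illustrative choice)

# L2 (ridge): closed form; shrinks every weight toward zero
w_l2 = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# L1 (lasso): proximal gradient descent; soft-thresholding
# can drive weights exactly to zero
w_l1 = np.zeros(5)
step = 1.0 / np.linalg.norm(X.T @ X, 2)
for _ in range(2000):
    grad = X.T @ (X @ w_l1 - y)
    z = w_l1 - step * grad
    w_l1 = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(np.round(w_l2, 3))  # small but typically all nonzero
print(np.round(w_l1, 3))  # irrelevant features typically exactly 0
```

With the L1 penalty the irrelevant features usually come out exactly zero, which is why L1 is often used for feature selection.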

Dropout is another useful technique for tackling overfitting.
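For reference, a minimal sketch of inverted dropout (the variant used by most frameworks; the function name and setup here are just for illustration): at training time each unit is dropped with probability `rate`, and the survivors are rescaled so the expected activation stays the same, so nothing needs to change at inference time.

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero a random fraction `rate` of units and
    rescale the survivors by 1/(1-rate) to keep the expectation unchanged."""
    if not training or rate == 0.0:
        return activations
    keep = rng.random(activations.shape) >= rate  # True = unit survives
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
a = np.ones((4, 8))                 # a batch of activations, all 1.0
out = dropout(a, rate=0.5, rng=rng)  # entries are now either 0.0 or 2.0
```

Because surviving units are scaled up during training, you can use the same forward pass at test time with `training=False` and no rescaling.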

I like this explanation, too. Feel free to take a look:
-Regularization for Simplicity: L₂ Regularization  |  Machine Learning  |  Google for Developers

Best regards
