How does regularization decrease generalization error?

We use regularization in machine learning to reduce the generalization gap between the training and test datasets. The various regularization techniques end up reducing our training accuracy, so does this actually increase test accuracy, and if so, can anyone explain the logic behind it?

Hello @NightWing
Generalization error grows when the model's parameters (weights) become large or the model is very complex, which leads to overfitting. Overfitting means the model learns the training data too well, capturing not only the underlying patterns but also the noise, outliers, and random fluctuations present in the training set. The model's training accuracy will therefore be very high, but when it is evaluated on unseen data (the test set), it may not reach the expected accuracy.
By adding a regularization term we penalize large weights, so the model cannot fit the training data too closely (it is less likely to memorize noise, outliers, etc.). As a result it tends to give better accuracy on the test data as well.
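
To see this concretely, here is a minimal NumPy sketch (the data, the degree-9 polynomial, and the penalty strength `lam` are illustrative choices, not from the original question). It fits the same model with and without an L2 penalty; the penalized fit trades a little training error for lower test error and much smaller weights.

```python
# Minimal sketch of L2 regularization (ridge) on a small noisy dataset.
# The data sizes, the polynomial degree, and `lam` are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: y = sin(2*pi*x) plus noise, fitted with a degree-9
# polynomial so the unregularized model has enough capacity to memorize noise.
x_train = rng.uniform(0, 1, size=12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.2, size=12)
x_test = rng.uniform(0, 1, size=200)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(scale=0.2, size=200)

def design(x, degree=9):
    """Polynomial feature matrix [1, x, x^2, ..., x^degree]."""
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(x, y, lam):
    """Closed-form minimizer of ||Xw - y||^2 + lam * ||w||^2."""
    X = design(x)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(w, x, y):
    return np.mean((design(x) @ w - y) ** 2)

for lam in [0.0, 1e-3]:
    w = ridge_fit(x_train, y_train, lam)
    print(f"lam={lam:g}  train MSE={mse(w, x_train, y_train):.3f}  "
          f"test MSE={mse(w, x_test, y_test):.3f}  ||w||={np.linalg.norm(w):.1f}")
```

With `lam=0` the weights blow up and the training error is near zero while the test error is large; with a small penalty the training error rises slightly but the test error drops.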


If performance on the training set sits at one end of the scale and performance on the test set at the other, then the purpose of regularization is to bring the two closer together, trading a little training accuracy for better test accuracy.
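
As a rough illustration of that middle ground (again with made-up data and an arbitrary alpha grid), sweeping the penalty strength of scikit-learn's Ridge shows the whole scale: a near-zero alpha memorizes the training noise, a huge alpha underfits both sets, and intermediate values bring the training and test errors closest together.

```python
# Sketch: sweep the regularization strength and watch train/test error trade off.
# The dataset, polynomial degree, and alpha grid are illustrative, not from the thread.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=60).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(scale=0.2, size=60)
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.5, random_state=0)

for alpha in [1e-8, 1e-4, 1e-2, 1.0, 100.0]:
    # Degree-12 polynomial features give the model room to overfit when alpha is tiny.
    model = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=alpha))
    model.fit(x_tr, y_tr)
    print(f"alpha={alpha:g}  "
          f"train MSE={mean_squared_error(y_tr, model.predict(x_tr)):.3f}  "
          f"test MSE={mean_squared_error(y_te, model.predict(x_te)):.3f}")
```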