Regularized Regression Using Gradient Descent Deteriorates Performance

Hi Everyone.

I don't know why there is no Gradient_Descent function that uses regularization, along with a comparison showing how regularized regression outperforms the version without the regularization parameter.

So I implemented it on my own, and the result is confusing. When I set lambda > 0, the cost starts to increase relative to the case where lambda == 0, i.e., without the regularization term.

I feel there is a significant underlying mathematical point here, which may be why this isn't covered in the course. Any ideas? My gradient computation function is below.

{moderator edit: code removed}

Please don’t post your code on the forum.

Regularization increases the cost on the training set.

But the model you get from training with regularization gives improved results on data that wasn’t in the training set.

This is because regularization helps avoid overfitting the training set.
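To see this concretely, here is a minimal sketch of an L2-regularized linear regression cost (the `compute_cost` name and `lambda_` parameter are illustrative, not the course's graded implementation). Since the penalty term is non-negative, for any fixed `w` and `b` the training cost can only rise as lambda grows:

```python
import numpy as np

def compute_cost(X, y, w, b, lambda_=0.0):
    """Mean squared error cost with an optional L2 penalty on w."""
    m = X.shape[0]
    err = X @ w + b - y                  # prediction errors
    mse = (err @ err) / (2 * m)          # unregularized cost
    reg = (lambda_ / (2 * m)) * (w @ w)  # L2 penalty, always >= 0
    return mse + reg

# Toy data: the regularized training cost is never below the unregularized one.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
w, b = np.ones(3), 0.0
for lam in (0.0, 1.0, 10.0):
    print(f"lambda={lam}: cost={compute_cost(X, y, w, b, lam):.4f}")
```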

My apologies, I didn't know that sharing code is prohibited.
Regarding your point that regularization increases the cost: thank you for this insight. I wasn't sure whether to expect the same scale from the regularized and unregularized cost functions.

On the other hand, yes, I also expected the regularized model to perform better on unseen (test) data, but it still didn't. Below is the performance of SGDRegressor and LinearRegression from sklearn alongside my two functions, with and without the regularization term. I look forward to your response.
Thanks

[image: test-set performance comparison of the four models]
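For context, here is a minimal sketch of the kind of comparison I mean, run on a hypothetical stand-in dataset (`make_regression`) rather than my actual (removed) code:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data; the real dataset from my notebook isn't shown here.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "LinearRegression": LinearRegression(),
    # penalty="l2" makes SGDRegressor a regularized (ridge-like) model;
    # scaling matters because SGD is sensitive to feature magnitudes.
    "SGDRegressor": make_pipeline(
        StandardScaler(),
        SGDRegressor(penalty="l2", alpha=0.01, random_state=0),
    ),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test R^2 = {model.score(X_test, y_test):.3f}")
```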

Perhaps you haven't selected the optimum regularization parameters.
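One common approach is to sweep a grid of regularization strengths and keep the value that scores best on held-out data. A minimal sketch using scikit-learn's `RidgeCV`, again on hypothetical stand-in data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Stand-in data; substitute your own train/test split.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RidgeCV picks the alpha (sklearn's name for lambda) that performs
# best under cross-validation on the training set.
model = RidgeCV(alphas=np.logspace(-4, 2, 25)).fit(X_train, y_train)
print("best alpha:", model.alpha_)
print("test R^2:", model.score(X_test, y_test))
```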