Regularization implementation

Prof. Andrew teaches in the course how to implement regularization for both linear regression and logistic regression, but he does not mention how to do it if you are using libraries like sklearn, which is the real-world scenario. Can anyone help with how to use it in any library, preferably sklearn?

Note: when I researched, I found that people either scale all the data or use Lasso, Ridge, or other variants, but not what Prof. Andrew does. Is there a way to implement what he teaches using sklearn or other alternatives?

Hello @Youssef_Adel_Fathy,

Choose the sklearn tool that gives you regularization. This one, for example, and configure the relevant parameters:
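For instance, a minimal sketch of configuring L2 regularization with `Ridge` (the data here is made up for illustration; `alpha` plays the role of the regularization parameter λ from the course):

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy data: y is roughly 2x with a little noise
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.1, 5.9, 8.2])

# alpha is the L2 regularization strength: larger alpha -> more shrinkage
model = Ridge(alpha=1.0)
model.fit(X, y)
print(model.coef_, model.intercept_)
```

Note that `Ridge` solves the regularized least-squares problem in closed form (or with specialized solvers), not with gradient descent.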

This MLS only teaches the L2 regularization, and teaches it in the context of solving it with gradient descent. In sklearn, Ridge, Lasso, and ElasticNet do similar things but not using gradient descent:

| name | regularization types available | uses gradient descent? |
|---|---|---|
| `sklearn.linear_model.SGDRegressor` | L2 / L1 / ElasticNet | Yes |
| `sklearn.linear_model.Lasso` | L1 | No |
| `sklearn.linear_model.Ridge` | L2 | No |
| `sklearn.linear_model.ElasticNet` | L2 / L1 / ElasticNet | No |

So, the closest tool to our MLS is sklearn.linear_model.SGDRegressor, but it still has one major difference: the MLS teaches batch gradient descent, whereas SGDRegressor uses stochastic gradient descent, so don’t expect them to produce exactly the same result. If you just want similar results, Ridge is probably the easiest option.
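A minimal sketch of `SGDRegressor` with an L2 penalty, the closest sklearn setup to the course (synthetic data and parameter values are my own choices for illustration):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic data with known true coefficients [1.5, -2.0, 0.5]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# SGD is sensitive to feature scale, so standardize first
X_scaled = StandardScaler().fit_transform(X)

# penalty="l2" with strength alpha; updates are stochastic (per sample),
# not the batch gradient descent taught in the MLS
model = SGDRegressor(penalty="l2", alpha=0.01, max_iter=1000, random_state=0)
model.fit(X_scaled, y)
print(model.coef_)
```

Because the updates are stochastic, the learned coefficients will be close to, but not exactly, what batch gradient descent would give.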

All the info above can be found in sklearn’s documentation, so you can build a similar table for logistic regression if you want to, and you may start from here for what’s available.
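For the logistic regression side, here is a minimal sketch with sklearn's `LogisticRegression` (the dataset is synthetic, just for illustration). One gotcha worth knowing: its regularization parameter `C` is the *inverse* of the strength, roughly 1/λ, so smaller `C` means stronger regularization:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification data
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# penalty="l2" is the default; C = inverse regularization strength.
# The default solver ("lbfgs") is not gradient descent.
model = LogisticRegression(penalty="l2", C=1.0)
model.fit(X, y)
print(model.score(X, y))
```

If you want the gradient-descent analogue for classification, `sklearn.linear_model.SGDClassifier` with `loss="log_loss"` and `penalty="l2"` is the counterpart of `SGDRegressor`, again with stochastic rather than batch updates.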