Hi there,
I believe regularization in scikit-learn algorithms is configured through both the ‘penalty’ and ‘solver’ parameters. However, is it possible to set a different weight for lambda?
In other words, how can I manage overfitted models in scikit-learn so that they perform better on test datasets?
Thanks in advance.
Hey @FabianoMC,
Welcome to the community. I am assuming we are talking about sklearn’s implementation of Logistic Regression, since sklearn’s implementation of Linear Regression doesn’t offer any support for regularization.
Now, assuming my assumption is true, in sklearn’s implementation of Logistic Regression, regularization is controlled with the ‘penalty’ parameter. Using this parameter, you can apply ‘l1’, ‘l2’, or both of them simultaneously, i.e., ‘elasticnet’. Coming to the ‘solver’ parameter, it decides the algorithm used for the optimization problem (i.e., finding the optimal weights and bias); it doesn’t decide the regularization, at least not directly. However, some of these algorithms are compatible with only a subset of the regularizations sklearn offers, so you have to take care of that. The sklearn documentation states clearly which solver supports which types of regularization.
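As a quick illustration, here is a minimal sketch (with made-up toy data, just so it runs) of how the penalty/solver pairing looks in practice; the combinations below follow sklearn’s documented constraints:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data purely for illustration
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# L2 penalty: works with the default 'lbfgs' solver
l2_clf = LogisticRegression(penalty='l2', solver='lbfgs').fit(X, y)

# L1 penalty: 'lbfgs' does NOT support it; use 'liblinear' or 'saga'
l1_clf = LogisticRegression(penalty='l1', solver='liblinear').fit(X, y)

# Elastic net (L1 and L2 together): only 'saga' supports it,
# and l1_ratio controls the mix (0 = pure L2, 1 = pure L1)
en_clf = LogisticRegression(penalty='elasticnet', solver='saga',
                            l1_ratio=0.5, max_iter=5000).fit(X, y)
```

If you pass an incompatible pair (e.g., penalty='l1' with solver='lbfgs'), sklearn raises an error, so the constraint is easy to discover by trial as well.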
I was writing the rest of the reply when the post vanished into thin air.
Anyways, I hope it helps.
Cheers,
Elemento
Sorry @Elemento, I realized I had posted in the wrong week, but anyway, thanks for the reply and explanation.
Also, I found another post of yours discussing the ‘C’ parameter, and I get it now.
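For anyone reading this later, a minimal sketch of how I understand it (assuming sklearn’s convention that ‘C’ is the inverse of the regularization strength, roughly C = 1/lambda):

```python
from sklearn.linear_model import LogisticRegression

# C is the INVERSE of the regularization strength (roughly C = 1/lambda),
# so a smaller C means stronger regularization
weak_reg   = LogisticRegression(C=100.0)  # close to unregularized
default_c  = LogisticRegression(C=1.0)    # sklearn's default
strong_reg = LogisticRegression(C=0.01)   # heavy shrinkage; can help with overfitting
```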
See you.