Regularization: Intuition and Conservation of Influence

Hi there,

I think the most intuitive way is to look at your loss function and the components it consists of. Ask yourself how large lambda is, i.e. how "important" regularization is compared with your performance goal:
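As a sketch of that trade-off (the data, weights, and the L2 penalty here are illustrative assumptions, not from your setup), the regularized loss is just the performance term plus a lambda-weighted penalty:

```python
import numpy as np

# Hypothetical toy data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=50)

def ridge_loss(w, lam):
    """Performance term (MSE) plus lambda-weighted L2 penalty."""
    data_loss = np.mean((X @ w - y) ** 2)
    penalty = np.sum(w ** 2)
    return data_loss + lam * penalty

# With lam = 0 the optimizer only cares about fit; as lam grows,
# shrinking the weights toward zero starts to dominate the objective.
print(ridge_loss(w_true, lam=0.0))
print(ridge_loss(w_true, lam=10.0))
```

The larger you make `lam`, the more the optimizer is rewarded for small weights rather than for raw fit, which is exactly the "importance" balance described above.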

What you get afterwards is simply the output of your optimization problem on this very loss function. Independently of that: feature ranking might be a nice tool for you if you are interested in evaluating the importance of features: Permutation Importance vs Random Forest Feature Importance (MDI) — scikit-learn 1.3.2 documentation
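A minimal sketch of that permutation-importance approach (the synthetic dataset and model settings are assumptions for the example, not tied to your data):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic regression data: 5 features, 3 of them informative.
X, y = make_regression(n_samples=200, n_features=5,
                       n_informative=3, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each feature column and measure how much the score drops;
# a large drop means the model relied on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```

Unlike the impurity-based (MDI) importances built into random forests, permutation importance is computed on actual predictions, so it is less biased toward high-cardinality features, which is the comparison that linked scikit-learn example walks through.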

This thread might be worth a look, too: