Regularization Intuition

Hello everyone,
Wouldn’t the weights shrink more in a regularized model than in an unregularized one? Yet we still end up with a beautifully fitted model. Shouldn’t it fit worse, or undershoot the best-fit line a little (in terms of the y-coordinates)? How is the model compensating for the smaller weights on the features that actually matter? Is the learning rate pushing it through?
This doubt has been on my mind since I started, but I have yet to find a satisfactory answer.
Please help if you can, or comment if my question seems too abstract.

First, please understand that small weights don’t by themselves mean a good fit.
Gradient descent tries to give us a “perfect” fit of the data by driving the cost down, but that can lead to overfitting. The regularization term adds to the cost whenever the weights grow large, so large weights are penalized. Minimizing the combined cost therefore balances fitting the data against keeping the weights small, and that balance gives us a “just good” fit rather than an overfit one.
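To make that trade-off concrete, here is a minimal sketch (not from the original post) of an L2-regularized cost for a simple linear model with squared-error loss. The variable names (`X`, `y`, `w`, `b`, `lambda_`) and the toy data are just illustrative assumptions:

```python
import numpy as np

def regularized_cost(X, y, w, b, lambda_):
    """Mean squared error plus an L2 penalty on the weights."""
    m = X.shape[0]
    predictions = X @ w + b                          # linear model
    mse = np.sum((predictions - y) ** 2) / (2 * m)   # data-fit term: pushes toward a perfect fit
    penalty = (lambda_ / (2 * m)) * np.sum(w ** 2)   # regularization term: grows as the weights grow
    return mse + penalty

# Toy usage: a larger lambda_ makes big weights more expensive,
# so the minimizer settles on smaller weights and a smoother fit.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([1.1, 1.9, 3.2])
w = np.array([1.0])
b = 0.0
print(regularized_cost(X, y, w, b, lambda_=0.0))   # pure data-fit cost
print(regularized_cost(X, y, w, b, lambda_=1.0))   # same fit, higher cost because of the penalty
```

The key point is that only the weights appear in the penalty; the data-fit term still rewards accurate predictions, so gradient descent doesn’t simply drive the weights to zero. It finds the smallest weights that still explain the data well.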