Hello, the main goal of regularization is to avoid overfitting and to reduce model complexity by shrinking the coefficients. From the slides, though, the weights are only reduced by a small amount after each iteration. I would like to ask: does regularization also help the cost function converge faster? Thanks.

Regularization helps prevent overfitting by steering the optimization away from certain minima that fit the training data well but do not generalize to the validation data. Regularization also decreases the contribution of some weights, so convergence will in general take longer, but it may land the optimization at a better minimum.

Hello @hyhung1234, adding the regularization term makes the cost curve for linear regression *steeper*; in other words, the gradients are larger, so yes, I think it can make the cost function converge faster, *but whether it converges to a good model depends on the choice of your regularization parameter*. That said, regularization is not designed to boost convergence speed, as Gent pointed out. In practice you would pick the regularization parameter through a proper model-selection approach (definitely not by considering speed), and you may well find that the speed is not very different in the end.
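To make the "steeper cost" point concrete, here is a minimal sketch (my own toy example, not from the slides) of gradient descent on ridge regression. The only change regularization makes to the update is the extra `lam * w` term in the gradient, which both steepens the cost surface and shrinks the final weights:

```python
import numpy as np

def ridge_gd(X, y, lam, lr=0.01, iters=500):
    """Gradient descent on J(w) = ||Xw - y||^2 / (2n) + (lam/2) * ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        # the lam * w term is the regularization's contribution to the gradient
        grad = X.T @ (X @ w - y) / n + lam * w
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w_plain = ridge_gd(X, y, lam=0.0)  # ordinary least squares
w_reg = ridge_gd(X, y, lam=1.0)    # ridge: weights end up smaller in norm
print(np.linalg.norm(w_plain), np.linalg.norm(w_reg))
```

The regularized solution has a strictly smaller norm, which is exactly the "weights reduced by a small amount each iteration" behaviour the question asks about.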

@hyhung1234 On the other hand, for the weights to converge well, you might want to decrease your learning rate: with a regularization term the gradient is larger, so you want your update steps to be finer as you approach the minimum. As a result, reducing the learning rate means training takes longer.
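A quick sketch of the trade-off above (a toy 1-D quadratic cost I made up, not from the course): halving the learning rate makes each step finer but roughly doubles the number of iterations needed to reach the same gradient tolerance.

```python
def iters_to_converge(lam, lr, tol=1e-6, max_iters=20000):
    """Iterations of gradient descent on J(w) = (w - 1)^2 / 2 + (lam/2) * w^2
    until the gradient magnitude drops below tol."""
    w = 0.0
    for i in range(max_iters):
        grad = (w - 1.0) + lam * w  # data term plus regularization term
        if abs(grad) < tol:
            return i
        w -= lr * grad
    return max_iters

fast = iters_to_converge(lam=0.1, lr=0.1)
slow = iters_to_converge(lam=0.1, lr=0.05)
print(fast, slow)  # the smaller learning rate needs more iterations
```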