C1_W3_Lab09_Regularization_Soln regularization term effect


I’m struggling to grasp how introducing a regularization term into the cost function balances the trade-off between bias and variance. I can see the effect demonstrated on a handful of training and test datasets, but I’m puzzled about whether it generalizes to all datasets. Is there a mathematical rationale or foundational concept that explains why this formula effectively strikes that balance?

Regularization is a tool used to prevent overfitting to the training set.

  • Not enough regularization causes high variance (overfitting).
  • Too much regularization causes high bias (underfitting).

Regularization works by adding an extra term to the cost, based on the sum of the squares of the weight values.
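For reference, here is a minimal numpy sketch of that extra term, assuming the squared-error cost used in the linear-regression labs (the function name `compute_cost_reg` and the variable names are illustrative, not the lab's exact code):

```python
import numpy as np

def compute_cost_reg(X, y, w, b, lambda_=1.0):
    """Squared-error cost plus an L2 penalty on the weights.

    X: (m, n) inputs, y: (m,) targets, w: (n,) weights,
    b: scalar bias, lambda_: regularization strength.
    """
    m = X.shape[0]
    err = X @ w + b - y                               # prediction errors
    data_cost = (err @ err) / (2 * m)                 # usual squared-error cost
    reg_cost = (lambda_ / (2 * m)) * np.sum(w ** 2)   # penalty on weight sizes
    return data_cost + reg_cost                       # larger weights => larger cost
```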

Since we are still trying to minimize the cost, this extra term creates an incentive to learn smaller weight values.
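To make that incentive concrete, the gradient-descent update with this squared penalty works out to (using the course's notation: $\alpha$ learning rate, $\lambda$ regularization strength, $m$ training examples):

$$
w_j \leftarrow w_j - \alpha\left[\frac{\partial J_{\text{unreg}}}{\partial w_j} + \frac{\lambda}{m} w_j\right]
= \underbrace{\left(1 - \alpha\frac{\lambda}{m}\right)}_{<\,1} w_j \;-\; \alpha\frac{\partial J_{\text{unreg}}}{\partial w_j}
$$

So every step first shrinks each weight multiplicatively toward zero, then applies the usual data-driven update. This is why the technique is also called "weight decay."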

It is still your job to adjust the lambda parameter to achieve the best balance between bias and variance.
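In practice that tuning is usually an empirical sweep: fit the model at several values and compare training error against error on held-out data. Here is a minimal sketch using scikit-learn's `Ridge` (whose `alpha` parameter plays the role of lambda; its exact scaling differs from the lab's formula), on synthetic data made up for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=30)   # noisy targets

# Degree-10 polynomial features: a flexible model that overfits easily
X = PolynomialFeatures(degree=10, include_bias=False).fit_transform(x.reshape(-1, 1))
X_tr, X_cv, y_tr, y_cv = train_test_split(X, y, test_size=0.33, random_state=0)

for alpha in [1e-6, 1e-3, 1e-1, 10, 1000]:
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    print(f"alpha={alpha:g}  "
          f"train MSE={mean_squared_error(y_tr, model.predict(X_tr)):.3f}  "
          f"cv MSE={mean_squared_error(y_cv, model.predict(X_cv)):.3f}")
```

Too little regularization keeps the training error low while the cross-validation error stays high (variance); too much drives both errors up (bias). You pick the value in between.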

Thanks for the explanation. What remains unclear to me is the mechanism by which the regularization parameter selectively exerts a significant influence on certain model coefficients, nearly reducing them to zero, while impacting others only minimally.

It doesn’t. It reduces the magnitude of all weights, based on the sum of their squared values, so large-magnitude weights are reduced more than those with smaller magnitudes.
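One way to see why: the gradient of the penalty term with respect to each weight is $\frac{\lambda}{m} w_j$, i.e. proportional to that weight's own value. A tiny numeric sketch (the numbers are made up for illustration):

```python
import numpy as np

w = np.array([10.0, 1.0, 0.1])      # hypothetical weights of very different sizes
lambda_, m = 1.0, 50
penalty_grad = (lambda_ / m) * w    # gradient of (lambda/(2m)) * sum(w**2)
print(penalty_grad)                 # [0.2   0.02  0.002]
```

The pull toward zero is ten times stronger on a weight of 10 than on a weight of 1, but since that pull fades as the weight shrinks, this squared penalty never drives a weight exactly to zero.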
