*{moderator edit - quiz answers removed}*

Why does decreasing regularization help to decrease bias?

If you have an overfitting problem (training accuracy > test/dev accuracy), then that means you have too much variance, right? So you need to decrease variance and increase bias. One way to do that is to add or increase regularization. So we have the relationship:

Increased regularization → increased bias

And it works the other way, too.
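As an illustration (not from the thread itself), here is a minimal NumPy sketch using closed-form ridge (L2) regression on a small synthetic dataset; the data and values are made up for demonstration. As the regularization strength lambda grows, the weights are shrunk toward zero and the *training* error rises, which is exactly the "more regularization → more bias" relationship:

```python
import numpy as np

# Illustrative synthetic data (not from the course or the thread).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
true_w = np.array([1.5, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=30)

def ridge_train_mse(lam):
    """Closed-form ridge solution: w = (X^T X + lam*I)^{-1} X^T y,
    then return the mean squared error on the training data."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    resid = y - X @ w
    return float(resid @ resid / len(y))

# Training error is nondecreasing in lambda: stronger regularization,
# worse fit to the training set, i.e. higher bias.
for lam in [0.0, 1.0, 10.0, 100.0]:
    print(f"lambda={lam:6.1f}  train MSE={ridge_train_mse(lam):.4f}")
```

Setting `lam=0.0` recovers plain least squares (the lowest training error); the same monotone effect holds for L2 weight decay in a neural net, it just isn't available in closed form there.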

But in the example we have here, it’s *not* an overfitting problem, right? It’s an “avoidable bias” problem. So what are useful techniques for solving that? It depends on exactly how you got to the current solution, but if you got here by starting with an overfitting model and then adding regularization to damp it down, then the observed performance may be telling you that wasn’t the right strategy, at least not yet. The first step is to remove the regularization and get back to the problem of fitting the training data better.
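The diagnosis step above can be sketched as a few lines of Python. This is just an illustration of the decision logic (the function name and thresholds are mine, not from the course): avoidable bias is the gap between training error and human-level error, and variance is the gap between dev error and training error; whichever gap is larger tells you which problem to attack first.

```python
def diagnose(human_err, train_err, dev_err):
    """Return which problem to work on first, given error rates in [0, 1].

    avoidable bias = train error - human-level (proxy for Bayes) error
    variance       = dev error   - train error
    """
    avoidable_bias = train_err - human_err
    variance = dev_err - train_err
    return "avoidable bias" if avoidable_bias > variance else "variance"

# Training error far above human level -> fit the training set better first.
print(diagnose(human_err=0.01, train_err=0.08, dev_err=0.09))
# Training error near human level but dev error high -> reduce variance
# (e.g. add regularization or more data).
print(diagnose(human_err=0.01, train_err=0.02, dev_err=0.10))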

Do we always get a bias increase when we decrease variance? And if yes, how can we decrease variance without increasing bias?

I’m not sure I understand the question. Bias and variance are roughly the opposite of each other, so increasing one decreases the other essentially by definition. Are you referring to something specific that Prof Ng says in the lectures someplace? If so, please give us the reference (name of the video and time offset).