Course 2, Week 1, Exercise Regularization

I finished the Regularization exercise, but as the red-underlined note in the image below mentions, there is also a “trade-off” between accuracy on the training set and accuracy on the dev set. That is, when the variance is reduced, the bias goes up at the same time.
But as Ng taught in his online courses, only some of the older methods lead to a trade-off situation, and we don’t need to worry about that “trade-off” with modern methods.
So I am confused: is there a method, as Ng said, for which we no longer need to consider the “trade-off” between accuracy on the training set and on the dev set?

Hi, @AdamWang.

Sorry for the late reply.

It’s not that you no longer have to consider the trade-off, but rather that you have tools at your disposal to improve one without hurting the other much.

For example, if you regularize properly, you can train a bigger network to reduce bias without increasing variance much. And if you can get more data, you can reduce variance without affecting bias much.
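In case a concrete example helps, this is roughly what the L2 penalty in that exercise looks like in numpy. Treat it as an illustrative sketch rather than the notebook's exact code; the names `parameters` and `lambd` are just my own choices here.

```python
import numpy as np

def compute_cost_with_l2(A_out, Y, parameters, lambd):
    """Cross-entropy cost plus an L2 penalty on every weight matrix.

    A_out      -- output-layer activations, shape (1, m)
    Y          -- true labels, shape (1, m)
    parameters -- dict of weights/biases, e.g. {"W1": ..., "b1": ..., "W2": ...}
    lambd      -- regularization strength (larger -> smaller weights, less variance)
    """
    m = Y.shape[1]

    # Usual cross-entropy part of the cost
    cross_entropy = -np.sum(Y * np.log(A_out) + (1 - Y) * np.log(1 - A_out)) / m

    # L2 penalty: (lambda / 2m) * sum of squared entries of every weight matrix
    l2 = sum(np.sum(np.square(W)) for name, W in parameters.items()
             if name.startswith("W")) * lambd / (2 * m)

    return cross_entropy + l2
```

The larger `lambd` is, the more the weights are pushed toward zero, which reduces variance; the price is usually some loss of training-set accuracy.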

Does that make sense? :slight_smile:

Hello, @nramon.
Thank you for your reply. I think I got it. That is, by regularizing, the test accuracy improves (the variance is reduced) while the training accuracy (bias) doesn’t get much worse.
And if I want to reduce bias, I should design a deeper network. :slight_smile:

Adam

Be careful. If you only regularize, you should expect training set performance to be hurt, which is what the notebook says.

This lecture specifically discusses the two approaches I mentioned to just reduce bias or just reduce variance without hurting the other much.

Let me know if my explanation was not clear :slight_smile:

Thank you very much for pointing me to that video. I think I really got it this time after rewatching it (I remember I watched it the first time before I had learned about regularization).

That is, with traditional machine learning methods (from maybe decades ago) without deep learning, we always run into the “trade-off” problem, because every time we reduce the variance or the bias, the other one increases. In contrast, in deep learning, if we have high bias, building a deeper network is effective in most cases, and if we have high variance, a bigger dataset and regularization methods can help. (As Ng said in the slide below.)

That is, if we have a well-regularized network, training a bigger network almost never hurts. :astonished:
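If it helps to see that summary written out, here is a tiny Python paraphrase of the decision logic from the slide. It is purely illustrative, not code from the course.

```python
def basic_recipe(high_bias: bool, high_variance: bool) -> list[str]:
    """Rough paraphrase of the bias/variance recipe discussed above (illustrative only)."""
    suggestions = []
    if high_bias:
        # High bias (underfitting): make the network bigger / deeper
        suggestions.append("train a bigger / deeper network")
    if high_variance:
        # High variance (overfitting): more data and/or regularization
        suggestions.append("get more data")
        suggestions.append("add regularization (e.g. L2, dropout)")
    return suggestions
```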