Test set, Bias and Variance

Hello,

After watching the Week 1 lectures, I think that the difference between the human-level error and the training error is called "bias" or "avoidable bias",
and the difference between the training error and the dev set error is called "variance".
Please correct me if the above is wrong.
If it is right, please explain what the test set error has to do with bias and variance, if anything.

Your understanding of avoidable bias and variance is correct.

When you have 3 splits:

  1. Use the train set to train the model for each hyperparameter configuration.
  2. Use the dev set to tune model hyperparameters, i.e. pick the best-performing model on this dataset.
  3. Once the best-performing model is selected, use the test set to report model performance. In other words, the test set is meant for reporting purposes.

When there are 2 splits, the dev and test sets are the same. Use that set for both model selection and reporting.
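
Here is a minimal sketch of that 3-split workflow, assuming scikit-learn; the data, the model, and the hyperparameter values are all placeholders, not anything specific to the course:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Hypothetical data; in practice X and y come from your problem.
X, y = np.random.rand(1000, 20), np.random.randint(0, 2, 1000)

# Split into train / dev / test (e.g. 80 / 10 / 10).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_model, best_dev_acc = None, -1.0
for C in [0.01, 0.1, 1.0, 10.0]:                                           # hyperparameter configurations
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)   # 1. train on the train set
    dev_acc = model.score(X_dev, y_dev)                                    # 2. select on the dev set
    if dev_acc > best_dev_acc:
        best_model, best_dev_acc = model, dev_acc

test_acc = best_model.score(X_test, y_test)                                # 3. report on the test set
print(f"dev accuracy: {best_dev_acc:.3f}, test accuracy: {test_acc:.3f}")
```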

Thanks for your reply.
After reading your answer, I think that when I have three splits, the test set has no connection to the bias or the variance.
If that statement is correct, does the same apply if the metric I am using is accuracy rather than error? Please explain.
Thanks in advance.

Accuracy is just 1 - error, right? So all the same ideas apply. A gap in accuracy or a gap in error represents either variance or bias, depending on which two quantities you are comparing.
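
As a toy illustration (the numbers here are made up, not from the lectures), the same gaps can be read off whether you track error or accuracy:

```python
# Hypothetical results, in error terms (accuracy would just be 1 - error).
human_level_error = 0.01
train_error       = 0.05
dev_error         = 0.10

avoidable_bias = train_error - human_level_error   # 0.04 -> focus on fitting the training set better
variance       = dev_error - train_error           # 0.05 -> focus on generalization (regularization, more data)

print(f"avoidable bias: {avoidable_bias:.2f}, variance: {variance:.2f}")
```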

There can be differing degrees of bias between the training and dev sets, and then between the dev set and the test set. Each case may mean something different in terms of the steps you need to take to remedy it.

This is a large and complex subject and it’s one of the major topics of Course 3. Prof Ng keeps coming back to it and the ideas get more complex as you go through the course. My suggestion would be to “hold that thought” and just continue and listen to all that Prof Ng has to say in Week 2 of Course 3. It also might be worth listening to the relevant lectures in Week 1 again with all that we’ve discussed on this thread in mind. The ideas might come together for you in a better way going through again after having thought about it as we’ve been doing here.

Dev and test sets come from the same distribution. When the dev error is far lower than the test error, you are overfitting the dev set. Use a larger dev set to get around this.

Thanks for the replies.
I will watch the Week 2 lectures and then try to bring everything together again. That's on one hand.
On the other hand, I do think that a big difference between the dev error and the test error means the model is overfitting the dev set, but I am not sure why it does not mean underfitting on the test set instead,
even knowing that they come from the same distribution.
So I will be looking for an explanation of that once I understand bias and variance better. Thank you for mentioning it.

Everything is relative to something else, right? If the test error is noticeably higher than the dev error, then that means the model has more bias on the test data than the dev data. But whether the dev data is overfitting or underfitting depends on where it is in relation to accuracy or error on the training data, right? Prof Ng does discuss all this in quite a bit of detail in the lectures. There’s one point at which he shows a big chart with all the different datasets and labels all the gaps between them. In my notes, it looks like it is in the lecture in Week 2 where he talks about the case in which the Training set is from a different distribution than the dev and test sets. It’s where he introduces the concept of the “training dev” set. Please stay tuned for that.
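
For reference, if I remember that chart correctly, the gaps line up roughly like this (in the case where the training set may come from a different distribution than the dev and test sets):

  1. Human-level error → Training error: avoidable bias.
  2. Training error → Training-dev error: variance.
  3. Training-dev error → Dev error: data mismatch.
  4. Dev error → Test error: degree of overfitting to the dev set.

But do check the lecture itself rather than taking my recollection as definitive.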

Thanks for the note I will do my best.