Bias, variance diagnostic

We saw that, in order to evaluate a model, we use the test error rather than the validation error, because the validation error is an optimistic estimate of the generalization error. Yet in the bias/variance diagnostic we use J_train and J_cv. Why don't we use J_train and J_test instead?

Hi @Bilel_Djemel

We use the validation set (J_cv) to choose hyperparameters, for example the regularization strength lambda or the polynomial degree of the input features, and to diagnose overfitting and underfitting. Once we find hyperparameters that give a small J_cv, we have some confidence in the model. But because those hyperparameters were tuned against the validation set, we then use the held-out test set to check how the final model (parameters learned during training plus hyperparameters chosen on validation) would perform if we deployed it. In summary:
  • The training set is used to fit the model's parameters.
  • The validation set is used to choose the hyperparameters.
  • The test set is used to check the final model (parameters plus hyperparameters chosen from the train and validation sets) before deployment.
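To make the three roles concrete, here is a minimal numerical sketch. It assumes a synthetic 1-D regression problem, ridge regression solved in closed form, and a hypothetical list of lambda candidates; none of these specifics come from the course itself.

```python
import numpy as np

# Synthetic 1-D regression data (hypothetical example)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.5, size=300)

# 60/20/20 split: train / cross-validation / test
idx = rng.permutation(300)
tr, cv, te = idx[:180], idx[180:240], idx[240:]

def design(X, degree=2):
    # Polynomial feature map with a bias column: [1, x, x^2, ...]
    return np.hstack([X ** d for d in range(degree + 1)])

def fit_ridge(A, y, lam):
    # Closed-form ridge solution: w = (A^T A + lam I)^(-1) A^T y
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def mse(A, y, w):
    return float(np.mean((A @ w - y) ** 2))

A = design(X)
# Training set fits the parameters w for each candidate lambda;
# the validation set (J_cv) selects which lambda to keep.
lam_star, w_star = min(
    ((lam, fit_ridge(A[tr], y[tr], lam)) for lam in [0.01, 0.1, 1.0, 10.0]),
    key=lambda p: mse(A[cv], y[cv], p[1]),
)
# The test set is touched only once, at the end, to estimate
# the generalization error of the final choice.
j_test = mse(A[te], y[te], w_star)
print(lam_star, round(j_test, 3))
```

Note that the test rows `te` never influence either the fitted weights or the choice of lambda, which is exactly why J_test stays an honest estimate.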

Please feel free to ask any follow-up questions.

Hi. Optimizing the hyperparameters usually involves several iterations. This is what the process could look like:

  • Train the model (Train data)
  • Evaluate the performance of your model (Validation data)
  • Tune the hyperparameters based on the results on the validation data

You repeat this process until you find a model that fits the validation data well. However, because you are optimizing against the validation data, there is a chance of overfitting to it as well. That is where the test data comes in:

  • Evaluate on the test data

If the results are similar to those on the validation data, good news: you have a solid model.
If the results are clearly worse than on the validation data, you have overfit to the validation set and need to start over.

So we optimize on the validation data and save the test set to get one final, unbiased check at the end of the optimization.
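The loop above can be sketched in code. This is a hedged illustration, not the course's implementation: it assumes a synthetic regression task, uses polynomial degree as the hyperparameter being iterated on, and the 1.5x comparison threshold at the end is an arbitrary rule of thumb.

```python
import numpy as np

# Synthetic data (hypothetical example)
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(240, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=240)

idx = rng.permutation(240)
tr, cv, te = idx[:144], idx[144:192], idx[192:]

def features(x, degree):
    # Columns x^degree, ..., x^1, x^0 (bias)
    return np.vander(x, degree + 1)

def mse(x, y, w, degree):
    return float(np.mean((features(x, degree) @ w - y) ** 2))

# Iterative loop: train on the train data, evaluate on the validation data,
# keep the degree with the smallest validation error.
results = []
for degree in range(1, 9):
    A = features(X[tr, 0], degree)
    w, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
    results.append((mse(X[cv, 0], y[cv], w, degree), degree, w))

j_cv, degree_star, w_star = min(results, key=lambda r: r[0])

# One final evaluation on the held-out test data
j_test = mse(X[te, 0], y[te], w_star, degree_star)
if j_test <= 1.5 * j_cv:  # arbitrary rule-of-thumb threshold
    print("test error close to validation error: model looks good")
else:
    print("test error much worse: likely overfit to the validation set")
```

The key point the code makes explicit: the test rows `te` appear only once, after the selection loop has finished, so J_test is not contaminated by the tuning.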

Hope this helps.