Hello,

I have now finished the section on model evaluation and bias/variance, and I have a few questions:

What I took away from those sections was the following sequence of steps:

- build the models and evaluate the training/cross-validation errors
- adjust the model based on bias/variance (e.g. change the polynomial degree, the amount of data, etc.)
- tune the regularisation

(4. Precision/Recall Trade-Off)
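To make sure I understand step 1, here is a minimal sketch of what I think the labs are doing: fit a few candidate models and compare their training vs. cross-validation errors (the synthetic data, candidate degrees, and hold-out split are my own illustrative assumptions, not from the course):

```python
# Sketch of step 1: fit several candidate polynomial models and
# compare training vs. cross-validation error.
# Data, degrees, and the simple hold-out split are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 60)
y = 0.5 * x**2 + rng.normal(0, 0.5, 60)  # true signal is quadratic

# simple hold-out split standing in for cross-validation
x_tr, y_tr = x[:40], y[:40]
x_cv, y_cv = x[40:], y[40:]

def mse(coeffs, xs, ys):
    """Mean squared error of a polynomial (given by coeffs) on (xs, ys)."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

errors = {}
for degree in (1, 2, 6):
    coeffs = np.polyfit(x_tr, y_tr, degree)
    errors[degree] = (mse(coeffs, x_tr, y_tr), mse(coeffs, x_cv, y_cv))

# choose the model with the lowest cross-validation error
best = min(errors, key=lambda d: errors[d][1])
```

As I understand it, this is where the labs stop and commit to `best` before doing any bias/variance work.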

Now I wonder: in the labs we always first chose a model based on our results in step 1 and then compared differently adjusted and regularised versions of it. But could it not be that an initially worse-performing model would end up outperforming the chosen model after the adjustments for bias/variance and regularisation?

Or, in other words: would best practice be to go through every single step with every single model and **then** decide which model to choose? If so, is that even feasible? We learned that adjusting for high bias and high variance can involve many steps (e.g. collecting more training data), which can take a lot of time. Since every model potentially has different problem areas, it would take an enormous amount of time to optimise every single model and then pick the best one. That might well be the way to go, but I want to make sure this is what experienced ML practitioners actually do.
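Concretely, the exhaustive approach I am asking about would amount to a joint search over model and regularisation strength, something like this sketch (the data, candidate grids, and closed-form ridge fit are my own illustrative assumptions):

```python
# Sketch of the exhaustive alternative: jointly search over model
# complexity (polynomial degree) AND regularisation strength, then
# pick the combination with the lowest cross-validation error.
# Data and candidate grids are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 80)
y = x**3 - x + rng.normal(0, 0.3, 80)  # true signal is cubic

x_tr, y_tr = x[:60], y[:60]
x_cv, y_cv = x[60:], y[60:]

def ridge_fit(xs, ys, degree, lam):
    """Closed-form ridge regression on polynomial features."""
    X = np.vander(xs, degree + 1)
    return np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ ys)

def cv_error(w, xs, ys, degree):
    """Mean squared error of the fitted weights on held-out data."""
    return float(np.mean((np.vander(xs, degree + 1) @ w - ys) ** 2))

results = {}
for degree in (1, 3, 8):
    for lam in (0.0, 0.1, 1.0):
        w = ridge_fit(x_tr, y_tr, degree, lam)
        results[(degree, lam)] = cv_error(w, x_cv, y_cv, degree)

# pick the (degree, lambda) pair with the lowest cross-validation error
best_degree, best_lam = min(results, key=results.get)
```

With 3 degrees and 3 regularisation values this is only 9 fits, but I can see how it would blow up once "collect more data" and other per-model fixes enter the grid, which is exactly what my question is about.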

Or is there a different way to approach this? Or is the way the labs approach it the "correct" way, and is it common practice to first choose a model based on the training/cross-validation errors and only then start adjusting that model on the other parameters?