Trial and error

I’m on the topic “Bias and variance,” and throughout this week the instructor showed how to evaluate and choose aspects of the model (degree of the polynomial in a regression, architecture of a neural network, value of the regularization parameter, etc.) based on trial and error, evaluating Jcv. Thinking about a neural network with many neurons and a very large training dataset, is this approach also used?


Hi @Amanda_dos_Santos, great question!

A large neural network and a large dataset do add complexity when evaluating models, but the approach remains the same.

Bias and variance serve the same fundamental purpose, which is to evaluate the model’s performance.

Hyperparameter tuning, which is essentially trial and error, follows the same approach for large datasets and complex neural networks.

The principles of bias, variance, and model evaluation apply to neural networks as they do to simpler models. The scale of the network and the size of the dataset add complexity but do not fundamentally change these principles. The approach of trial and error, or more formally, hyperparameter tuning and regularization, remains essential in developing an effective neural network model.
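To make the trial-and-error loop concrete, here is a minimal sketch in Python using scikit-learn. It assumes a regression task with a train/cross-validation split; the synthetic data and the candidate degrees and regularization values are made up for illustration, not taken from the course labs. For each candidate configuration it fits a model, computes Jtrain and Jcv as mean squared error, and keeps the configuration with the lowest Jcv.

```python
# Sketch of model selection by trial and error: loop over candidate
# complexities and regularization strengths, compare J_cv.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

# Synthetic data standing in for a real training / cross-validation split.
rng = np.random.default_rng(0)
x = rng.uniform(0, 5, size=200).reshape(-1, 1)
y = np.sin(x).ravel() + 0.2 * rng.standard_normal(200)
x_train, y_train = x[:150], y[:150]
x_cv, y_cv = x[150:], y[150:]

results = []
for degree in [1, 2, 4, 8]:                 # candidate model complexities
    for lam in [0.001, 0.01, 0.1, 1.0]:     # candidate regularization strengths
        model = make_pipeline(
            PolynomialFeatures(degree),
            StandardScaler(),
            Ridge(alpha=lam),
        )
        model.fit(x_train, y_train)
        j_train = mean_squared_error(y_train, model.predict(x_train))
        j_cv = mean_squared_error(y_cv, model.predict(x_cv))
        results.append((degree, lam, j_train, j_cv))

# Pick the configuration with the lowest J_cv. Comparing J_train and J_cv
# for that choice indicates whether bias (both high) or variance
# (J_cv much higher than J_train) is the remaining problem.
best = min(results, key=lambda r: r[3])
print(f"degree={best[0]}, lambda={best[1]}, "
      f"J_train={best[2]:.3f}, J_cv={best[3]:.3f}")
```

With a large neural network you would replace the inner model with your network and the candidates with architectures or regularization settings, but the logic of the loop is the same: train, evaluate Jcv, compare, and choose.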

Overall, you will use all the principles you learn in this course as you gain more experience and end up working on larger projects. How you do things may differ, but you will apply the same principles; this is one of those things that won’t change in the next 10 years, so it is worth learning properly.

I hope this helps!


Thank you! The answer was very enlightening!