I’m just following the first week of the course “Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization”, and in the first video, Andrew says that if we have a very large dataset, our test and dev sets can be smaller, breaking with the 60/20/20 proportion. Wouldn’t this make our model tend toward underfitting? Because the model will train on a very rich dataset but will be tuned and evaluated against a much poorer one.
If you have a lot of data, then even a small fraction set aside for the validation (dev) and test sets will still contain plenty of examples, enough to give reliable, low-variance estimates for tuning and evaluating the model. For example, with 1,000,000 examples, a 1% dev set is still 10,000 examples. Also, the model is only ever trained on the training set; the dev and test sets just measure how well it generalizes, so shrinking them doesn’t cause underfitting.
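As a minimal sketch of what such a split might look like (assuming scikit-learn, and using randomly generated `X`, `y` as placeholders for your own data), a 98/1/1 split on a million-example dataset still leaves 10,000 examples in each of the dev and test sets:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: 1,000,000 examples with 20 features each (assumption).
X = np.random.randn(1_000_000, 20)
y = np.random.randint(0, 2, size=1_000_000)

# Carve off 2% of the data, then split that half-and-half into
# dev (validation) and test sets -> a 98/1/1 split overall.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.02, random_state=0
)
X_dev, X_test, y_dev, y_test = train_test_split(
    X_holdout, y_holdout, test_size=0.5, random_state=0
)

print(len(X_train), len(X_dev), len(X_test))  # 980000 10000 10000
```

Even though the dev and test sets are only 1% each, they are large enough in absolute terms to estimate performance reliably.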