Overfitting in C1_W1_Lab_2_multi-output with small data sets

In the C1_W1_Lab_2_multi-output lab exercise, we have a model with about 26,000 parameters but a dataset of only 614 training examples. Even if we count the 2 outputs, that is only around 1,200 data points. Is it typical practice to build neural networks with this many parameters when the dataset is this small? I am trying to get a sense of how common this is before translating it to other small-dataset applications. Thanks
Normally you need enough data to train the model to a good fit for the application, and the data also needs to contain informative features for learning, rather than just being a large number of images collected for the sake of having a lot of data.
The purpose of building a neural network with many parameters is to extract as many features from the training data as possible. There is no definitive answer for how big or complex a model should be; you find it by trial and error (of course, you would use knowledge from similar applications as a starting point). For example, you might train the same architecture at a few different widths and compare them on a held-out validation split, as in the sketch below.
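This is only a rough sketch of that trial-and-error step, not the lab's code: the data is random placeholder data with the same shape as the lab's (614 rows, 8 features, 2 outputs), and `build_model`, the widths tried, and the output names are made up for illustration.

```python
# Trial-and-error model sizing on a small, two-output dataset (placeholder data).
import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

rng = np.random.default_rng(0)
x_train = rng.normal(size=(614, 8)).astype("float32")   # placeholder features
y1_train = rng.normal(size=(614, 1)).astype("float32")  # placeholder output 1
y2_train = rng.normal(size=(614, 1)).astype("float32")  # placeholder output 2

def build_model(width):
    """Two-output regression model; `width` is the hidden-layer size to try."""
    inputs = Input(shape=(8,))
    x = Dense(width, activation="relu")(inputs)
    x = Dense(width, activation="relu")(x)
    y1 = Dense(1, name="y1_output")(x)
    y2 = Dense(1, name="y2_output")(x)
    model = Model(inputs=inputs, outputs=[y1, y2])
    model.compile(optimizer="adam", loss="mse")
    return model

# Try a few widths and keep the one with the best validation loss.
results = {}
for width in (16, 64, 128):
    model = build_model(width)
    history = model.fit(x_train, [y1_train, y2_train],
                        validation_split=0.2, epochs=50, verbose=0)
    results[width] = min(history.history["val_loss"])

print("validation loss by width:", results)
print("best width:", min(results, key=results.get))
```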
One caveat to watch out for when building large models is overfitting. As long as your model keeps learning, the extra capacity is an advantage, but if the validation performance collapses while the training loss keeps dropping, you have a problem.
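A minimal way to watch for that collapse is to hold out a validation split and stop training once the validation loss stops improving, for example with Keras's EarlyStopping callback, plus some regularization such as dropout. Again, the data, layer sizes, and dropout rate below are placeholders, not the lab's actual setup.

```python
# Guarding a large model against overfitting on a small dataset (placeholder data).
import numpy as np
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.models import Model

rng = np.random.default_rng(1)
x = rng.normal(size=(614, 8)).astype("float32")    # placeholder features
y1 = rng.normal(size=(614, 1)).astype("float32")   # placeholder target 1
y2 = rng.normal(size=(614, 1)).astype("float32")   # placeholder target 2

inputs = Input(shape=(8,))
h = Dense(128, activation="relu")(inputs)
h = Dropout(0.3)(h)                                # regularization helps with few samples
h = Dense(128, activation="relu")(h)
out1 = Dense(1, name="y1_output")(h)
out2 = Dense(1, name="y2_output")(h)
model = Model(inputs=inputs, outputs=[out1, out2])
model.compile(optimizer="adam", loss="mse")

# Stop once validation loss has not improved for 10 epochs,
# and roll back to the best epoch instead of keeping overfit final weights.
early_stop = EarlyStopping(monitor="val_loss", patience=10,
                           restore_best_weights=True)

history = model.fit(x, [y1, y2], validation_split=0.2,
                    epochs=500, callbacks=[early_stop], verbose=0)

# Diverging curves (train loss falling while val loss rises) signal overfitting.
print("final train loss:", history.history["loss"][-1])
print("best val loss:", min(history.history["val_loss"]))
```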