Deeper model vs. more neurons

Hello, there!

As I said in the other thread in reply to your follow-up query:

"If your model can learn a less complex dataset, that is always a good sign.

Practically, a less complex dataset has fewer dimensions/features, so 1–2 hidden layers are usually sufficient. Datasets with larger feature counts may call for 3–5 hidden layers.

A common rule of thumb says the number of hidden neurons should be 2/3 the size of the input layer, plus the size of the output layer. But that's not always the case; it also depends on other factors such as the complexity of the training data, outliers, and the overall simplicity or complexity of the dataset.

Too few neurons can lead to underfitting, whereas too many can cause overfitting. The goal is to strike a balance between these conditions."
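To make the rule of thumb concrete, here is a minimal sketch comparing the parameter counts of a shallow-but-wide layout against a deeper, narrower one. The layer sizes (20 inputs, 3 outputs) are made-up examples, not from any particular dataset:

```python
def rule_of_thumb_hidden(n_inputs, n_outputs):
    # Rule of thumb quoted above: hidden neurons ≈ 2/3 of the
    # input layer size, plus the size of the output layer.
    return round(2 / 3 * n_inputs + n_outputs)

def mlp_param_count(layer_sizes):
    # Total weights + biases for a fully connected network,
    # e.g. [20, 16, 3] -> (20*16 + 16) + (16*3 + 3).
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical example: 20 input features, 3 output classes.
hidden = rule_of_thumb_hidden(20, 3)                    # -> 16

shallow_wide = mlp_param_count([20, 2 * hidden, 3])     # one wide hidden layer
deep_narrow = mlp_param_count([20, hidden, hidden, 3])  # two narrower layers

print(hidden, shallow_wide, deep_narrow)                # 16 771 659
```

Note that the two-layer network here actually has fewer parameters than the single wide layer, which is part of why depth vs. width is a trade-off rather than a simple "more is better" question.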