Hi everyone!
As I learned from Prof. A.J., the more complex a model is, the more flexible it becomes.
When we increase the number of hidden neurons, we increase the flexibility of the final model, so we expect it to fit the data better and reach higher accuracy on the training set (even if that accuracy is the result of overfitting).
However, when I tested different hidden layer sizes on the Gaussian dataset (one of the extra datasets available for additional practice), I observed something different, which seemed strange to me. The results are as follows.
As the image above shows, when the hidden layer has 50 neurons, the model appears less flexible than when it has only 5 neurons.
I expected the 50-neuron case to produce a decision boundary with much more curvature in it, but it did not!
Anyway… I would be really grateful if someone could explain the probable reasons to me.
A test on the noisy moons dataset shows the same problem; a minimal sketch of the experiment is below.
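In case it helps anyone reproduce this, here is a rough sketch of the comparison using scikit-learn. Note that `make_gaussian_quantiles` and `make_moons` are only my stand-ins for the course datasets, and the activation, sample sizes, and training settings are my own guesses, not the exact configuration from the course:

```python
# Sketch: compare training accuracy of a narrow vs. wide hidden layer.
# Dataset generators and all hyperparameters here are my own assumptions.
from sklearn.datasets import make_gaussian_quantiles, make_moons
from sklearn.neural_network import MLPClassifier

datasets = {
    "gaussian": make_gaussian_quantiles(
        n_samples=400, n_features=2, n_classes=2, random_state=0
    ),
    "noisy moons": make_moons(n_samples=400, noise=0.3, random_state=0),
}

for name, (X, y) in datasets.items():
    for n_hidden in (5, 50):
        clf = MLPClassifier(
            hidden_layer_sizes=(n_hidden,),  # single hidden layer, varying width
            activation="tanh",
            max_iter=5000,                   # default 200 often stops too early
            random_state=0,
        )
        clf.fit(X, y)
        print(f"{name}: {n_hidden} neurons -> train acc = {clf.score(X, y):.3f}")
```

I fixed `random_state` so the two runs are directly comparable, and raised `max_iter` because `MLPClassifier`'s default of 200 iterations can stop before the wider model has converged, which might make it look less flexible than it really is.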