Welcome to the community!
Here you should find a similar thread where the question is answered: Dimensioning a neural network - #2 by AbdElRhaman_Fakhry
In summary:
There is no strict right or wrong, but you can consider whether the dimensional space spanned by the neurons is sufficient for your problem. Ultimately it is your task to design a good architecture, often through trial and error. I have often seen ML engineers increase the feature dimension (number of neurons) in the first hidden layer relative to the input layer, which can be interpreted as giving the network more opportunity to learn complex and abstract behaviour.
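As a minimal sketch of that widening pattern (toy data and layer sizes are my own assumptions, not from the thread), here is a small scikit-learn network whose first hidden layer is wider than the input:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# hypothetical toy data: 200 samples, 10 input features
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# first hidden layer wider than the input (32 > 10), giving the net more
# room to form abstract feature combinations, then narrowing towards the output
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

Whether the extra width actually helps depends on your data; treat it as one architecture to try, not a rule.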
For example you can:
- check and quantify whether the features carry mutual information, and remove redundant information (e.g. with Principal Component Analysis (PCA) or Partial Least Squares (PLS) transformation) to improve your ratio of data to features;
- also check the feature importance to focus on the most meaningful ones!
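Both steps above can be sketched in a few lines (the dataset and the random-forest importance measure are illustrative choices on my part; the linked notebook covers further variants):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

# hypothetical data with deliberately redundant features
X, y = make_classification(n_samples=300, n_features=12,
                           n_informative=4, n_redundant=6, random_state=0)

# PCA: keep just enough components to explain 95% of the variance,
# discarding the redundant directions
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape[1], "components retained out of", X.shape[1])

# feature importance via a random forest, to focus on the most meaningful inputs
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
print("features ranked by importance:", ranking)
```

Note that PCA components are linear combinations of the original features, so if you need interpretability of individual inputs, importance-based selection may be the better fit.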
More details on PCA, PLS, importance calculation etc. can be found here: https://github.com/christiansimonis/CRISP-DM-AI-tutorial/blob/master/Classic_ML.ipynb
Please let me know if you have any open questions.
Best regards
Christian