Hi there,
there is no right or wrong answer here, but you can think about whether the dimensional space spanned by the 2 neurons would be sufficient to solve your problem. Ultimately it's your task to design a good architecture, often with some trial and error.
I have often seen ML engineers increase the feature dimension (i.e. the number of neurons) in the first hidden layer compared to the input layer, which can be interpreted as giving the net more capacity to learn complex behaviour.
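As a minimal sketch of that idea (the 10 input features and the hidden sizes are just illustrative assumptions, not a recommendation for your data), such a model could look like this in PyTorch:

```python
import torch
import torch.nn as nn

# Toy example: first hidden layer is wider than the input,
# then the dimension is reduced again towards the output.
model = nn.Sequential(
    nn.Linear(10, 64),   # expand from 10 input features to 64 hidden units
    nn.ReLU(),
    nn.Linear(64, 32),   # reduce the dimension again
    nn.ReLU(),
    nn.Linear(32, 2),    # e.g. 2 output logits
)

x = torch.randn(8, 10)   # batch of 8 samples
out = model(x)
print(out.shape)         # torch.Size([8, 2])
```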
I think this thread might be interesting for you:
Please let me know if it answers your question.
Best regards
Christian