Machine Learning Specialization C2_W1: problem with the TensorFlow implementation

Hi, I have almost completed Week 1 of Course 2 of the ML Specialization, but there is one thing I do not understand about implementing neural networks: how is it decided which type of work is done by which neuron? For example, in the coffee-roasting model, neuron 1 of layer 1 flags a bad roast when the temperature is too low, neuron 2 when the duration is too short, and neuron 3 when the combination of temperature and duration is bad. My question is: how does the network decide which neuron looks for which condition, given that the training data and the gradient descent algorithm are the same for all three neurons? And how does it avoid the neurons learning the same condition?

You cannot decide beforehand which neuron will handle which part of the learning process. It just happens that in this very simple NN you can see each neuron's individual contribution.

That being said, as a general principle, in such a simple NN you may gain some control over a neuron's learning path by initializing its weights to particular values, but this is not feasible for larger NNs.
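A minimal pure-Python sketch of why initialization matters (this is a hypothetical 2-input, 2-hidden-neuron, 1-output network with made-up numbers, not the course's coffee-roasting model): if both hidden neurons start with identical weights, they receive identical gradients at every step and never differentiate, whereas random initialization breaks the symmetry and lets them diverge:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(W, v, x, y, lr=0.5):
    """One gradient-descent step on squared error for a tiny 2-2-1 net.
    W: two hidden-neuron weight vectors; v: output-layer weights."""
    a = [sigmoid(W[k][0] * x[0] + W[k][1] * x[1]) for k in range(2)]
    err = (v[0] * a[0] + v[1] * a[1]) - y
    for k in range(2):
        # Chain rule: gradient of 0.5*err^2 w.r.t. hidden neuron k's weights
        g = err * v[k] * a[k] * (1 - a[k])
        W[k][0] -= lr * g * x[0]
        W[k][1] -= lr * g * x[1]
        v[k] -= lr * err * a[k]
    return W, v

x, y = [1.0, 2.0], 1.0  # one made-up training example

# Identical initialization: both neurons get the same gradient every step,
# so they remain exact clones of each other.
W_same, v_same = [[0.3, 0.3], [0.3, 0.3]], [0.5, 0.5]
for _ in range(100):
    W_same, v_same = train_step(W_same, v_same, x, y)
print(W_same[0] == W_same[1])  # True: the neurons never differentiate

# Random initialization breaks the symmetry: the neurons diverge.
random.seed(0)
W_rand = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
v_rand = [random.uniform(-1, 1) for _ in range(2)]
for _ in range(100):
    W_rand, v_rand = train_step(W_rand, v_rand, x, y)
print(W_rand[0] != W_rand[1])  # True: each neuron learns different weights
```

This symmetry-breaking is why frameworks such as TensorFlow initialize weights randomly by default rather than to zeros.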

Since all the neurons go through the same gradient descent procedure to find their parameters, why do they end up learning different things?

Each neuron in a NN receives input from the previous layer, but the neurons differ in their weights and biases. As the network learns, those weights and biases are adjusted differently so as to minimize the overall cost function and improve the network's performance. This means that each neuron may become specialized in certain types of input.
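To illustrate this numerically (a hypothetical sketch with made-up weights, not code from the course): on the same training example, the same backprop chain rule produces a different gradient for each neuron, because the gradient depends on that neuron's own current activation and its outgoing weight. Same algorithm, different updates:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = [2.0, 1.2]   # one made-up (normalized) training example
y = 1.0

# Two hidden neurons with different current weights, and their
# outgoing weights v into the single output unit.
W = [[0.9, -0.2], [-0.4, 0.7]]
v = [0.6, -0.3]

a = [sigmoid(W[k][0] * x[0] + W[k][1] * x[1]) for k in range(2)]
err = (v[0] * a[0] + v[1] * a[1]) - y

# Chain rule: dJ/dw_kj = err * v_k * a_k * (1 - a_k) * x_j
# The factors v_k and a_k are specific to each neuron.
grads = [[err * v[k] * a[k] * (1 - a[k]) * x[j] for j in range(2)]
         for k in range(2)]
print(grads[0] != grads[1])  # True: each neuron gets its own update
```

So even though every neuron is trained by the same procedure on the same data, their different starting points feed back into different gradients, and they drift toward different specializations.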

Edit: you may also refer to this thread and comment, which discuss the behavior of neural networks; they could provide additional insight into the topic we are discussing.