Hi there, Professor Ng showed a picture of a general neural network that took 4 inputs and had 3 neurons in a hidden layer before producing a single output (a price prediction). He noted that each neuron takes in all 4 inputs and runs them through an activation function (ReLU, as an example). This was the housing price predictor example. I'm having difficulty understanding what process causes these neurons to become "unique", i.e., to interpret the 4 provided inputs in distinct ways, if every neuron sees the same inputs and uses the same activation function.

Is there some process that initializes the hidden-layer neurons with some randomness such that they eventually converge to different weights?
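To make the question concrete, here is a minimal sketch (using NumPy; the 4-input, 3-neuron sizes match the lecture diagram, and the feature values are made up) of the kind of random initialization I'm guessing might be what makes the neurons start out different:

```python
import numpy as np

rng = np.random.default_rng(0)

# 4 inputs feeding a hidden layer of 3 neurons, as in the diagram.
# If every neuron started with identical weights, they would all compute
# the same value; drawing the weights randomly gives each neuron a
# different starting point.
W = rng.normal(scale=0.01, size=(3, 4))  # one row of weights per hidden neuron
b = np.zeros(3)

x = np.array([2104.0, 5.0, 1.0, 45.0])  # hypothetical house features

def relu(z):
    return np.maximum(0.0, z)

h = relu(W @ x + b)  # three activations, generally all different
print(h.shape)  # (3,)
```

Is it this randomness (rather than anything in the architecture itself) that lets gradient descent push each neuron toward a distinct role?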

Perhaps I'm asking this question prematurely. I would appreciate any help in understanding this better! I'm attaching the picture of the diagram he drew (or at least trying to; this is my first post).