From the lecture, what I understood is that every neuron in the NN has the same, identical activation function (please correct me if I am wrong), but each neuron learns a different feature from the set of incoming input features.
For example, in a face recognition model, the 1st neuron in the 1st hidden layer might learn a vertical line, the 2nd neuron an oriented line, and so on, while neurons in the deeper layers learn more complex features. But all neurons in each hidden layer are connected to all of the incoming inputs from the previous layer, and the same holds for the 1st hidden layer (all of its neurons are connected to the same set of input features).
So my question is: how does each neuron learn a different feature even though they all have the same, identical activation function and receive the same input values from the previous layer?
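To make the question concrete, here is a minimal sketch of the situation I mean (the weights, inputs, and the sigmoid choice are just my own hypothetical example, not from the lecture): two neurons with the same activation and the same inputs, differing only in their randomly initialized weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # The same activation function used by every neuron
    return 1.0 / (1.0 + np.exp(-z))

# The same input vector is fed to both neurons
x = np.array([0.5, -1.0, 2.0])

# Each neuron gets its own independently initialized random weights
w1 = rng.normal(size=3)
w2 = rng.normal(size=3)

# Same activation, same inputs, but different weights -> different outputs
print(sigmoid(w1 @ x), sigmoid(w2 @ x))

# If both neurons started with identical weights, their outputs
# (and their gradients) would be identical every step, so they
# could never diverge into different feature detectors
w_same = np.full(3, 0.1)
print(sigmoid(w_same @ x) == sigmoid(w_same @ x))
```

My guess from this is that the random initialization is what makes the difference, but I would like to confirm whether that is the whole story.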