How do different neurons learn different properties? What's stopping multiple neurons from learning the same properties and neglecting others completely?
Hey @Ayush_Aman,
Welcome to the community.
I guess we could flip the question around and ask, "What's stopping multiple neurons from learning different properties?". One of the simplest answers to that is initializing all the weights with the same value (zero or non-zero).
When the weights are randomly initialized, the neurons start at different points and hence learn different features. This is further fuelled by techniques such as dropout. In simple words, any element of stochasticity, such as different initial values or the use of dropout, pushes the neurons to learn different things.
So, in the absence of any other source of stochasticity such as dropout, initializing the weights with the same value leads the neurons to learn the same properties, while initializing them with different values lets them learn different properties.
You can easily validate this with a small experiment: initialize all the weights with the same value and don't use any randomization such as dropout. If you compare the weights after a number of iterations, you will find that all the neurons have ended up with the same weights (different from the initial ones). And since they have the same weights and the same input, they essentially compute the same thing.
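Here is a minimal sketch of that experiment, assuming NumPy, a toy regression task, and plain gradient descent on a one-hidden-layer network with no biases. The layer sizes, learning rate, and data are illustrative choices, not anything from the course.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                         # 200 samples, 3 features
y = (X @ np.array([1.0, -2.0, 0.5])).reshape(-1, 1)   # toy target

def train(W1, W2, lr=0.01, steps=500):
    """Gradient descent on MSE for: X -> ReLU(X W1) -> W2 -> y_hat (no biases)."""
    for _ in range(steps):
        H = np.maximum(0, X @ W1)          # hidden activations (ReLU)
        y_hat = H @ W2
        dY = 2 * (y_hat - y) / len(X)      # dLoss/dy_hat for MSE
        dW2 = H.T @ dY
        dH = (dY @ W2.T) * (H > 0)         # backprop through ReLU
        dW1 = X.T @ dH
        W1 -= lr * dW1
        W2 -= lr * dW2
    return W1, W2

hidden = 4

# Case 1: every hidden neuron starts with the same weights -> symmetry is never broken.
W1_same, W2_same = train(np.full((3, hidden), 0.5), np.full((hidden, 1), 0.5))
print("same init, hidden columns identical?",
      np.allclose(W1_same, W1_same[:, [0]]))   # True: all neurons compute the same thing

# Case 2: random initialization -> neurons diverge and learn different features.
W1_rand, W2_rand = train(rng.normal(scale=0.5, size=(3, hidden)),
                         rng.normal(scale=0.5, size=(hidden, 1)))
print("random init, hidden columns identical?",
      np.allclose(W1_rand, W1_rand[:, [0]]))   # False: the columns differ
```

In the same-initialization case, every column of W1 receives exactly the same gradient at every step, so the hidden neurons stay identical copies of each other no matter how long you train.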
P.S. - In this discussion I have excluded the bias term, but the same reasoning applies to the biases as well, with the exception that initializing the biases to the same or different values doesn't matter much, since initializing the weights differently is already enough to break the symmetry.
I hope this helps.
Regards,
Elemento
@Ayush_Aman, if you want an explanation with some simple maths:
Thank you @Elemento. That cleared up all my confusion.
Insightful. Thank you @rmwkwok.