Neural Network Weights

How do we end up with different weight values when we are feeding the same inputs to every neuron in a layer?

The reason that different neurons learn different patterns is that we start by randomly initializing all the weight values. This is called “Symmetry Breaking”. You filed this under “General Discussions”, so I’m not sure which course or specialization you are asking about, but here is a thread from DLS which talks about Symmetry Breaking.
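Here is a minimal NumPy sketch (my own illustration, not course code; the layer sizes are made up) contrasting zero initialization with small random initialization:

```python
import numpy as np

# Hypothetical layer: 4 inputs feeding 3 neurons.
n_in, n_out = 4, 3

# Zero initialization: every neuron has identical weights, so they all
# compute the same output and receive the same gradient update. They can
# never diverge from one another.
W_zero = np.zeros((n_out, n_in))

# Small random initialization breaks that symmetry: each neuron starts
# from a different point, so gradients differ and each neuron is free to
# learn a different feature.
rng = np.random.default_rng(0)
W_random = rng.standard_normal((n_out, n_in)) * 0.01

print(W_zero)
print(W_random)
```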


In classical machine learning we initialize the weights to 0, but in deep learning we initialize them to different random values, which is why different neurons learn different features. Is that right?

The thread that I linked earlier covers the difference between Logistic Regression (which does not require Symmetry Breaking) and Neural Networks, which do require Symmetry Breaking.
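To make the difference concrete, here is a small numeric check (again my own sketch, not from the course): with zero-initialized weights, every hidden unit of a one-hidden-layer network receives an identical gradient, so the units stay identical forever, while logistic regression with zero weights still gets a useful gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))           # 5 examples, 3 features (toy data)
y = rng.integers(0, 2, size=(5, 1)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer network, all weights initialized to zero.
W1 = np.zeros((3, 4))                     # 3 inputs -> 4 hidden units
W2 = np.zeros((4, 1))                     # 4 hidden units -> 1 output

A1 = sigmoid(X @ W1)                      # every hidden unit outputs 0.5
A2 = sigmoid(A1 @ W2)
dZ2 = A2 - y                              # gradient of the logistic loss
dW1 = X.T @ ((dZ2 @ W2.T) * A1 * (1 - A1))

# Each column of dW1 is one hidden unit's gradient -- all columns are
# identical (here all zero, since W2 is zero), so no update can ever make
# the hidden units differ: the symmetry is never broken.
print(dW1)

# Logistic regression (no hidden layer) has no such symmetry: with w = 0
# the gradient is generally nonzero, so training proceeds normally.
w = np.zeros((3, 1))
dw = X.T @ (sigmoid(X @ w) - y)
print(dw)
```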


Thank you for clarifying my doubt!