Hi everyone,

In the most basic neural network introduced in week 1, every neuron in a layer receives the same information from the previous layer, so what is the point of dividing it among multiple neurons? Don't they all just compute the same value?

At first, I thought of a neural network as a way to split the received information into multiple factors to be considered in a more complex way, but now I am a little confused.

Thank you.

Hi @francesco4203,

Neurons in the same layer compute the same value only if they are initialized with the same set of weights. Otherwise, they can differentiate and learn to look at the input features in different ways.

Cheers,

Raymond


Right! The point is that we must initialize all the neurons differently before we start the training process, for precisely that reason. This is called Symmetry Breaking, and it is accomplished by initializing the weights with random values; there are various algorithms for doing this that are used in different cases. I have not taken MLS, so I'm not sure how much Prof Ng discusses this in the MLS courses, but he does cover it in DLS Course 1 and Course 2.
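To see the symmetry problem concretely, here is a minimal NumPy sketch (not from the course materials; the data, layer sizes, and loss are made up for illustration). It computes the gradient of a tiny one-hidden-layer network and shows that when every hidden neuron starts with identical weights, every neuron also receives an identical gradient, so they stay identical forever; random initialization breaks that tie:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 samples, 3 input features (made-up data)
y = rng.normal(size=(8, 1))

def grad_W1(W1, W2):
    """Gradient of mean squared error w.r.t. the hidden-layer weights W1."""
    Z1 = X @ W1                      # (8, 4): pre-activations, 4 hidden neurons
    A1 = np.tanh(Z1)                 # hidden activations
    y_hat = A1 @ W2                  # (8, 1): linear output
    dy = 2 * (y_hat - y) / len(X)    # dLoss/dy_hat for MSE
    dA1 = dy @ W2.T                  # backprop through output layer
    dZ1 = dA1 * (1 - A1 ** 2)        # backprop through tanh
    return X.T @ dZ1                 # (3, 4): one column per hidden neuron

W2 = np.ones((4, 1))                 # identical output weights for all neurons

# Identical initialization: every hidden neuron gets the same weight column.
W1_same = np.full((3, 4), 0.5)
g_same = grad_W1(W1_same, W2)
print(np.allclose(g_same, g_same[:, :1]))   # all gradient columns identical?

# Random initialization: each neuron starts different, so gradients differ.
W1_rand = rng.normal(size=(3, 4)) * 0.5
g_rand = grad_W1(W1_rand, W2)
print(np.allclose(g_rand, g_rand[:, :1]))   # columns identical?
```

With the identical initialization the first check prints True: every column of the gradient matches the first, so a gradient step moves all four neurons in lockstep and they never differentiate. With the random initialization it prints False, which is exactly what symmetry breaking buys you.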
