Weights in each layer

In a neural network model, we use the weights in the current layer to compute the activations that are passed on to the next layer. But how do we get the weights in the current layer in the first place?

Hello @MOXIAO_LI,

When we set up the layer, we initialize its weights to some values, which are what we use for calculating the “activation” in the first round of gradient descent.
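
To make that concrete, here is a minimal sketch in plain NumPy (the layer size, the initializer, and the input values are just made-up illustrations, not the exact scheme the course labs use):

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_units = 4, 3                              # illustrative sizes
W = rng.normal(scale=0.1, size=(n_inputs, n_units))   # small random initial weights
b = np.zeros(n_units)                                 # biases typically start at zero

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 2.0, 3.0, 4.0])
a = sigmoid(x @ W + b)    # the very first forward pass already uses the initial W and b
print(a)
```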

Cheers,
Raymond

Sorry, I am still a little confused. So, when we set up a neural network model, before we fit any data to it, there are already some random weights, and this is forward propagation. Then, we feed our data into the model and the model does gradient descent to modify/learn the optimal parameters/weights. That is backward propagation.

Is that what you mean?

Yes, but let me rearrange a bit of what you said into the following order:

  1. When setting up a NN, initialize the weights
  2. When forward propagating, compute activations with the weights
  3. When backward propagating, compute the gradients and update the weights.
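
If it helps to see those three steps together, here is a toy sketch for a single sigmoid unit (the data, sizes, and learning rate are made up purely for illustration, not the course's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up toy data: 5 examples, 3 features
X = rng.normal(size=(5, 3))
y = (X.sum(axis=1) > 0).astype(float)

# 1. Initialize the weights
w = rng.normal(scale=0.1, size=3)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(100):
    # 2. Forward propagation: compute the activations with the current weights
    a = sigmoid(X @ w + b)

    # 3. Backward propagation: compute the gradients and update the weights
    dz = a - y                    # gradient of binary cross-entropy w.r.t. z
    dw = X.T @ dz / len(y)
    db = dz.mean()
    w -= lr * dw
    b -= lr * db
```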

Cheers,
Raymond

I am still a little confused. It is unclear to me why each neuron ends up with different weights after training. As far as I understand, when we start the forward propagation process, each neuron begins with some values for its weight vector w and its bias. We may select different initial weight values for each neuron, but why do they end up with different final weights? I mean, they all use the same input values, and it is not clear to me why they would converge to different values even if the initial weights are different. Maybe I am missing something in the backward propagation process, or some dependency between the computations of each neuron?
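
For reference, here is a tiny NumPy sketch of the situation I mean (the data, layer sizes, and initial weights are all made up just for illustration): two units in the same layer see the same inputs but start from different weights, and one forward/backward pass computes a gradient for each of them.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Same inputs for every unit in the layer
X = rng.normal(size=(8, 3))
y = rng.integers(0, 2, size=8).astype(float)

# Two units in one hidden layer, initialized with different random weights
W1 = rng.normal(scale=0.5, size=(3, 2))   # hidden weights (3 inputs -> 2 units)
W2 = rng.normal(scale=0.5, size=(2, 1))   # output weights (2 units -> 1 output)

# Forward pass
A1 = sigmoid(X @ W1)            # each unit's activation depends on its own weights
A2 = sigmoid(A1 @ W2)

# Backward pass (binary cross-entropy, biases omitted for brevity)
dZ2 = A2 - y.reshape(-1, 1)
dW2 = A1.T @ dZ2 / len(y)
dZ1 = (dZ2 @ W2.T) * A1 * (1 - A1)
dW1 = X.T @ dZ1 / len(y)

print(dW1)   # one column per unit: each unit gets its own gradient from backprop
```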