Is this the correct way to write the weights when there is a two-layer neural network?

If you check the superscript of W, which denotes the layer number, according to the notebook the input and the hidden layer have the same superscript.

@TMosh, please have a look.


What exactly would you like me to comment on?

What I need you to check is this: the square brackets in W and Z denote the layer of the neural network, right? But if you check the formulas, the superscript is always 1 while the subscript is the one that changes. Is that the correct way?


Hello @Mohammad_Omar_Adde,

The text focuses on the first and the second perceptrons, which are both in the first layer, so the equations are for the first layer.

Perhaps if the text switches to talking about the third perceptron, which is in the second layer, then there will be another equation that uses 2 in the superscripts of the symbols.
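To make the notation concrete, here is a minimal NumPy sketch (not the notebook's actual code; the shapes and activations are assumptions for illustration) of a two-layer network with two hidden perceptrons and one output perceptron. The superscript `[l]` is the layer index, starting at 1, and the subscript `i` is the neuron index within that layer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x = 3                              # number of input features (assumed)

W1 = rng.standard_normal((2, n_x))   # W^[1]: rows are w^[1]_1 and w^[1]_2
b1 = np.zeros((2, 1))                # b^[1]
W2 = rng.standard_normal((1, 2))     # W^[2]: a single row, w^[2]_1
b2 = np.zeros((1, 1))                # b^[2]

x = rng.standard_normal((n_x, 1))    # one input example

# Z^[1]_i = w^[1]_i . x + b^[1]_i
# Both hidden neurons share the superscript 1; only the subscript i changes.
Z1 = W1 @ x + b1                     # shape (2, 1): rows are Z^[1]_1, Z^[1]_2
A1 = np.tanh(Z1)                     # hidden-layer activations a^[1]

# Z^[2]_1 = w^[2]_1 . a^[1] + b^[2]_1
# The superscript switches to 2 only here, in the output layer.
Z2 = W2 @ A1 + b2                    # shape (1, 1)
A2 = 1 / (1 + np.exp(-Z2))          # sigmoid output a^[2]

print(Z1.shape, Z2.shape)
```

So the equations for the first and second perceptrons both carry superscript 1 because both neurons live in layer 1; the layer number only changes when you move to the output perceptron.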



Okay, that’s cool. So the hidden layer still counts as layer one, am I right?

This network has only one hidden layer and one output layer. The index of these layers starts from 1. I guess this is what you are asking?


Yeah, thanks!