General implementation of forward propagation

In this video there is a function that takes a_in, which is a 2-D array. My question is: if there are more neurons in the previous layer, then a_in can be different, but in the next layer W stays the same. What if the dimensions of W and a_in don't match? How can we perform the dot product then?

Hello @khushal_vanani,

If the shapes do not match, the dot product cannot be performed. When TensorFlow builds a neural network and counts how many weights are needed in each layer, it takes into account the number of features of your input data and the number of neurons you configured. This is how TensorFlow makes sure the shapes match.
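
To make the shape requirement concrete, here is a minimal NumPy sketch of a dense-layer forward pass (the variable names loosely follow the video's a_in and W, but the sizes and numbers here are made up): W needs exactly one row per incoming activation, and each column of W produces one neuron's output.

```python
import numpy as np

def my_dense(a_in, W, b):
    """One dense layer: a_in has shape (n_in,), W has shape (n_in, n_units)."""
    # The dot product only works because W has one row per incoming activation.
    return np.dot(a_in, W) + b

a_in = np.array([1.0, 2.0, 3.0])        # 3 activations from the previous layer
W = np.array([[0.1, 0.2],
              [0.3, 0.4],
              [0.5, 0.6]])              # shape (3, 2): 3 inputs -> 2 neurons
b = np.array([0.0, 0.0])

print(my_dense(a_in, W, b))             # shape (2,), one value per neuron
```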

However, if you promised (by configuration) to give TensorFlow 3 features per input sample, but later you give it 2 or 7, then TensorFlow will give you an error and ask for 3 features.
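
As a rough illustration (the layer sizes below are made up), a Keras model built for 3 features per sample will accept a batch with 3 features but reject one with only 2:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),          # you promise 3 features per sample
    tf.keras.layers.Dense(units=4),      # 4 neurons, so the kernel has shape (3, 4)
])

model.predict(np.ones((5, 3)))           # OK: 5 samples, 3 features each

try:
    model.predict(np.ones((5, 2)))       # only 2 features: shapes no longer match
except Exception as err:
    print(type(err).__name__)            # TensorFlow rejects the batch with a shape error
```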

Raymond