I have just started the Deep Learning Specialization and went through the first optional lab, C2_W1_Lab01_Neurons_and_Layers. I am a little confused about where the parameters W and b are calculated (where the model is found). Is tensorflow.keras finding these values? If so, each layer has a different input, namely x, then a[1], and so forth, so for every layer it will find another set of W and b. Is that right? And once these values are found, will it then predict the next a vector (the output values)? Did I understand this correctly?
Hello @Mikias_Alemayehu,
You have posted this thread in the DLS forum, and I have moved it back to the MLS forum for you.
It would be better if you had told us the variable names of the model you are asking about. Since I don't know which variables you mean, I will refer you back to some of the text of the lab that explains how the weights are obtained:
In the first part of the lab, the weights are initialized by TensorFlow.
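As a minimal sketch of that behaviour (the layer name and input here are hypothetical, not necessarily the lab's exact variables):

```python
import numpy as np
import tensorflow as tf

# Hypothetical single-unit layer, similar in spirit to the lab's Dense layer
linear_layer = tf.keras.layers.Dense(units=1, activation='linear')

# The weights do not exist until the layer is built against an input
x = np.array([[1.0]])        # one example with one feature
_ = linear_layer(x)          # calling the layer builds it; TensorFlow initializes W and b

W, b = linear_layer.get_weights()
print(W, b)                  # values picked by TensorFlow's default initializer, not by us
```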
The lab then shows that we set the weights to a particular set of values in order to compare the TensorFlow Dense layer's response to the hand-crafted dot-product equation's response.
The lab also shows another case of the weights first being initialized and then set manually.
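A rough sketch of that pattern looks like this (the values 200 and 100 are placeholders I picked for illustration, not necessarily the lab's):

```python
import numpy as np
import tensorflow as tf

linear_layer = tf.keras.layers.Dense(units=1, activation='linear')
linear_layer.build(input_shape=(None, 1))   # build so the weights exist

# Hypothetical values, chosen only so we can check the layer by hand
set_w = np.array([[200.0]])
set_b = np.array([100.0])
linear_layer.set_weights([set_w, set_b])    # overwrite TensorFlow's initialization

x = np.array([[1.0]])
a1 = linear_layer(x)                        # the Dense layer's response
alin = np.dot(x, set_w) + set_b             # hand-crafted dot product plus bias
print(a1.numpy(), alin)                     # both should print [[300.]]
```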
In practice, we let TensorFlow randomly initialize the weights and then train them with our training data. We do not set the weights manually as above; the only reason we did so in the lab is to compare the layer's response to our hand-crafted equation.
In practice, all layers' weights are randomly initialized by TensorFlow. Before training, those weights will give us meaningless predictions. After training, the weights are adjusted towards their optimal values, and they will give us meaningful predictions.
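To picture the whole workflow, and your point about each layer having its own W and b: here is a small sketch with a made-up dataset and layer sizes I chose only for illustration. Each Dense layer owns its own W and b (sized to its own input), the output a[1] of the first layer is the input of the second, and predictions only become meaningful after model.fit has adjusted the weights.

```python
import numpy as np
import tensorflow as tf

# Tiny made-up regression set, only to illustrate the train-then-predict flow
X_train = np.array([[1.0], [2.0], [3.0], [4.0]], dtype=np.float32)
y_train = np.array([[3.0], [5.0], [7.0], [9.0]], dtype=np.float32)

# Two Dense layers: each one has its own W and b
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=3, activation='relu', name='layer1'),
    tf.keras.layers.Dense(units=1, activation='linear', name='layer2'),
])

for layer in model.layers:
    W, b = layer.get_weights()
    print(layer.name, W.shape, b.shape)   # layer1: (1, 3) (3,)   layer2: (3, 1) (1,)

# Before training the weights are random, so these predictions mean nothing
print(model.predict(X_train, verbose=0))

model.compile(optimizer=tf.keras.optimizers.Adam(0.05), loss='mse')
model.fit(X_train, y_train, epochs=500, verbose=0)

# After training the weights have been adjusted, so the predictions track y_train
print(model.predict(X_train, verbose=0))
```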
Cheers,
Raymond
Thank you, Raymond. That has made things clearer.