Why does TensorFlow output the initial, randomly initialized weights?

Hi everyone,

I’ve just started the second course on the Machine Learning Specialization, and there’s something that I don’t understand well in the optional lab “C2_W1_Lab01_Neurons_and_Layers”. Let me use a screenshot for better understanding:

In the first step (1 in the screenshot), the neural network with one neuron outputs small values for w and a value of zero for b. Only in step 2 (2 in the screenshot) do we enter the proper numbers by hand, and then it works nicely.
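In code, the two steps look roughly like this (I'm reconstructing from memory, so the exact variable names and values in the lab may differ):

```python
import numpy as np
import tensorflow as tf

# Step 1: a single-neuron linear layer, never trained
linear_layer = tf.keras.layers.Dense(units=1, activation='linear')
linear_layer(np.array([[1.0]]))          # call it once so the weights get built
w, b = linear_layer.get_weights()
print(w, b)                              # small random w, b == 0

# Step 2: overwrite the weights by hand with known-good values
set_w = np.array([[200.0]])
set_b = np.array([100.0])
linear_layer.set_weights([set_w, set_b])
print(linear_layer.get_weights())        # now returns the values we set
```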

Why does that happen? Hasn't the neural network been trained in step 1? It's the model itself that should find the proper values of w and b, instead of us, right?

I’m sure there’s something I’m missing, but I can’t find what.

Thanks!

There is no training involved in this assignment. The weights you see in step 1 are just the default weights that Keras creates when the layer is built.
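If it helps to see where those numbers come from: by default, a Keras Dense layer initializes its kernel with small random values (Glorot uniform) and its bias with zeros, so a freshly built, untrained layer will always report a small random w and b = 0. A quick standalone sketch (not the lab code itself):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(units=1)
layer.build(input_shape=(None, 1))   # allocate the weights; no training happens here

# Defaults: kernel initializer is GlorotUniform, bias initializer is Zeros
print(layer.kernel_initializer)
print(layer.bias_initializer)
print(layer.get_weights())           # e.g. a small random kernel and a bias of exactly 0.0
```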

Training is covered next week.


I see! Then it makes sense. Thanks, @TMosh ! :grin: