Hello
Just wondering, for this lecture, where the values w1_1 = [1, 2], w1_2 = [-3, 4], w1_3 = [5, -6] and b1_1 = [-1], b1_2 = [1], b1_3 = [2] come from. Are the weights and biases in the first layer random values which are then fed into the second layer?
Thanks
Eric
I believe they’re just numbers pulled from thin air as an example.
Since this is just demonstrating how forward propagation works, it’s handy to have some real values to work with.
The actual weight values won't be nice, convenient integers; they'll come from the NN training process.
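To make the forward-propagation demo concrete, here's a minimal sketch using the lecture's example weights and biases. The input vector x and the ReLU activation are my own assumptions for illustration; the lecture may use a different input or activation.

```python
# Weights and biases from the lecture's example (one layer, three neurons).
W1 = [[1, 2], [-3, 4], [5, -6]]   # rows: w1_1, w1_2, w1_3
b1 = [-1, 1, 2]                   # b1_1, b1_2, b1_3

def dense(x, W, b):
    """Compute z_i = w_i . x + b_i for each neuron i."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(w_i, x)) + b_i
            for w_i, b_i in zip(W, b)]

def relu(z):
    return [max(0.0, v) for v in z]

x = [1, 1]               # hypothetical input, just for the walkthrough
z1 = dense(x, W1, b1)    # pre-activations: [3 - 1, 1 + 1, -1 + 2] = [2, 2, 1]
a1 = relu(z1)            # activations fed into the next layer
print(z1, a1)
```

During training, these same W1 and b1 entries would start as random values and be updated by gradient descent; the integers above are only there so the arithmetic is easy to follow by hand.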