In the Week 3 tutorial “Shallow Neural Network”, in the video “Computing a Neural Network’s Output” (timestamp 4:07):
It is shown that the matrix W[1] has size (4 x 3).
Is that because the input we’re providing has 3 features?
If someone could explain it a bit better, it would be really helpful for me to understand.
If you go back to the beginning of the lecture video where Prof Ng introduces the network representation, you can see that each node in layer 1 (the hidden layer) is connected to the inputs X_1, X_2, X_3, and there are 4 such nodes. So the weight matrix for layer 1 has size (4, 3): one row per hidden unit, one column per input feature.
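To make the shapes concrete, here is a minimal NumPy sketch (the variable names `W1`, `b1`, `x` are illustrative, not taken from the course notebooks): with 3 input features and 4 hidden units, W[1] is (4, 3), so multiplying it by the (3, 1) input column vector yields one pre-activation value per hidden unit.

```python
import numpy as np

n_x = 3  # number of input features (X_1, X_2, X_3)
n_h = 4  # number of units in the hidden layer

rng = np.random.default_rng(0)
W1 = rng.standard_normal((n_h, n_x))  # weight matrix for layer 1: shape (4, 3)
b1 = np.zeros((n_h, 1))               # bias vector for layer 1: shape (4, 1)
x = rng.standard_normal((n_x, 1))     # one input example as a column vector: (3, 1)

# (4, 3) @ (3, 1) -> (4, 1): one pre-activation per hidden unit
z1 = W1 @ x + b1
print(W1.shape, x.shape, z1.shape)
```

Each row of `W1` holds the 3 weights that one hidden unit applies to X_1, X_2, X_3, which is why the row count matches the number of hidden units and the column count matches the number of inputs.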
Here’s a thread which explains what is going on there in more detail.
Thank you for the explanation Kic.
Thank you Paul, the thread really helped.
So, could you please write out the complete 4 x 3 matrix? It is confusing, since he then multiplies it by the column vector [X1, X2, X3].
Have you read the thread that I linked in my previous reply on this thread?
Also please note that this thread is more than two years old, so there is no guarantee that the participants are still listening.
It is a bold move, to post on a thread that has been cold for two years.