C2_W1_Neural network implementation in Python: possible error?

Hey guys, I was following the course and believe there might be a slight mistake in the video Forward prop in a single layer, in the Neural network implementation in Python section. In the following screenshot, I have marked the w2_1 array in red. I think the array should have three components instead of two: the a1 array has three numbers, so there should be three corresponding weights.

Sorry if I misunderstood something, and the code is correct. I point it out in case someone might get confused by it.


Hello @Baste,

Welcome to the community. The number of elements in w1_2 (and also in w1_1 and w1_3) equals the number of elements in its input x, and the number of elements in a1 determines the number of elements in the weights of the next layer, such as w2_1.

So you are correct that w2_1 should have 3 elements, because its input a1 has 3 elements (layer 1 has 3 units). Thanks a lot for pointing out this issue. I will report it right away.
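To see the shapes concretely, here is a minimal sketch of forward prop in the per-unit style the video uses. The variable names (w1_1, b1_1, etc.) follow the lecture convention, but all numeric values below are made up purely to illustrate the dimensions:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Input x has 2 features, so every layer-1 weight vector has 2 elements.
x = np.array([200.0, 17.0])

# Layer 1: three units, each with a 2-element weight vector and its own bias.
w1_1 = np.array([1.0, 2.0]);   b1_1 = -1.0
w1_2 = np.array([-3.0, 4.0]);  b1_2 = 1.0
w1_3 = np.array([5.0, -6.0]);  b1_3 = 2.0

a1_1 = sigmoid(np.dot(w1_1, x) + b1_1)
a1_2 = sigmoid(np.dot(w1_2, x) + b1_2)
a1_3 = sigmoid(np.dot(w1_3, x) + b1_3)
a1 = np.array([a1_1, a1_2, a1_3])  # 3 activations from layer 1

# Layer 2: its input a1 has 3 elements, so w2_1 must have 3 elements,
# which is exactly the point raised in this thread.
w2_1 = np.array([-7.0, 8.0, 9.0]);  b2_1 = 3.0
a2_1 = sigmoid(np.dot(w2_1, a1) + b2_1)
```

If w2_1 had only 2 elements, np.dot(w2_1, a1) would raise a shape-mismatch error, which is a quick way to check this kind of slip in your own code.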

Cheers!


I noticed the same when I watched the video just now, and thought someone should have highlighted this in the forums, then found your post. Thank you.


Hi @ahmed.m, you are welcome! This is your first post, so welcome to the community!

Let me also take this chance to thank @Elemento, who actually confirmed that this was a problem and corrected me on it.

Cheers,
Raymond


On this slide, should the notation

z1_1 = np.dot(w1_1, x) + b

include b1_1 instead of only b? Or is the same bias being used for every neuron in layer 1?

Yes, and this has been reported to the course team. Thank you for letting us know.

Raymond