Any guidance would be appreciated. Thanks!
Looks like the shape of AL is not correct. The shape of the test data is (5,4).
If you look at the description of “linear_activation_forward(A_prev, W, b, activation)”, the inputs are:
A_prev – activations from previous layer (or input data): (size of previous layer, number of examples)
The 2nd dimension is the number of examples, so it is preserved all the way through to the end, and the shape of AL should be (1,4). But the shape of your AL is (1,2).
The operation flow is:
Input shape (5,4) →
Layer 1: Linear (W1 shape = (4,5), b1 shape = (4,1)), ReLU → output A1 shape = (4,4)
Layer 2: Linear (W2 shape = (3,4), b2 shape = (3,1)), ReLU → output A2 shape = (3,4)
Layer 3 (last): Linear (W3 shape = (1,3), b3 shape = (1,1)), Sigmoid → output AL shape = (1,4)
Checking the dimensions of each output may help you understand what happened; the sketch below walks through exactly that.
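Here is a minimal sketch of that check in NumPy (my own throwaway code, not the notebook's helpers; the weight shapes are copied from the flow above and the values are random):

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 4))   # test input: 5 features, 4 examples

# (W shape, activation) per layer, taken from the flow above
layers = [((4, 5), relu), ((3, 4), relu), ((1, 3), sigmoid)]

A = X
for l, ((rows, cols), g) in enumerate(layers, start=1):
    W = rng.standard_normal((rows, cols))
    b = rng.standard_normal((rows, 1))
    A = g(np.dot(W, A) + b)       # linear step followed by the activation
    print(f"Layer {l}: A.shape = {A.shape}")
# Prints (4, 4), (3, 4), (1, 4): the number of examples never changes
```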
How can I check the dimension of each output? I already used linear_activation_forward(A_prev, W, b, activation), so I couldn't figure out how to fix this problem. Could you please clarify a bit more?
If a variable is an array, you can get its dimensions (shape) by adding “.shape” to that variable, which you have already done many times in the earlier exercises. For example, if the name of an array is “a”, then “a.shape” gives the shape of “a”. You can just print whatever you want to check. If you add
print(A.shape)
just after you call “linear_activation_forward(…)”, you can easily see the output dimensions.
Note that “.shape” works for NumPy arrays, but not for plain Python lists. That is not the issue in this case, though.
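A quick illustration of that difference (a standalone example, not code from the assignment):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
print(a.shape)               # (2, 2): works because a is a NumPy array

lst = [[1.0, 2.0], [3.0, 4.0]]
# lst.shape                  # would raise AttributeError: lists have no .shape
print(np.array(lst).shape)   # (2, 2): convert to an array first
```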
Hi Kotchaporn,
Nobu has given you an explanation of why you aren't getting the right output. Have a look at the list you are passing in the call that computes AL.
The variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$.
It might also help to start by doing the “dimensional analysis” on this test case, so that you know what shapes you should be getting at each layer. Here’s a thread which shows how to do that.
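For reference, the compact shape rules behind that analysis (standard course notation: layer $l$ has $n^{[l]}$ units, $m$ is the number of examples, and $g^{[l]}$ is the layer's activation, ReLU or sigmoid here):

$$W^{[l]}: (n^{[l]},\, n^{[l-1]}), \qquad b^{[l]}: (n^{[l]},\, 1), \qquad A^{[l]} = g^{[l]}\big(W^{[l]} A^{[l-1]} + b^{[l]}\big): (n^{[l]},\, m)$$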