W4_Ex-5_L_model_Forward_output layer

I had written the code according to the instructions given, but it was throwing an error. Please help me find it. The error is at Exercise 5, L_model_forward. Thanks in advance.

{moderator edit - solution code removed}

If your linear_activation_forward function passed its tests (I am assuming that it did), then your problem is with your invocation of that function in the L_model_forward function. The docstring of the former function (as all good function documentation does) describes the arguments to the function. Here is the one that bites in this case:

activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

I will add emphasis to the text string part.
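To illustrate why the text string matters (this is not the assignment code; the function name `apply_activation` is hypothetical), here is a minimal sketch of how an activation passed as a string is typically dispatched:

```python
import numpy as np

def apply_activation(Z, activation):
    # Dispatch on the string value, exactly as the docstring describes:
    # activation must be the text string "sigmoid" or "relu".
    if activation == "sigmoid":
        return 1.0 / (1.0 + np.exp(-Z))
    elif activation == "relu":
        return np.maximum(0, Z)
    else:
        raise ValueError(f"Unknown activation: {activation!r}")

Z = np.array([[-1.0, 0.0, 2.0]])
print(apply_activation(Z, "relu"))  # note: "relu" is passed as a string
```

Passing anything other than one of those exact strings (e.g. a function object, or a misspelled name) means neither branch runs, which is a common source of errors in `L_model_forward`.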

P.S. In the future, please refrain from posting your code as it is against the honor code. Include only the traceback. Thanks much!

Thanks for the help! :heart:

Hello, I am having an error and I am stuck here.

In which exercise are you getting this error?

The shape of W and A, in my case, is (1, 3) and (3, 2) respectively, which is different from yours. So I guess the error is not pointing to the linear_forward function. Please share your full error.
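For reference, a quick way to check that two shapes are compatible for the forward dot product — using the (1, 3) and (3, 2) shapes mentioned above as placeholder values:

```python
import numpy as np

# W has shape (1, 3) and A has shape (3, 2), as in the post above.
W = np.ones((1, 3))
A = np.ones((3, 2))
b = np.zeros((1, 1))

# np.dot(W, A) is valid because W.shape[1] == A.shape[0];
# the result has shape (W.shape[0], A.shape[1]) = (1, 2).
Z = np.dot(W, A) + b
print(Z.shape)  # (1, 2)
```

If the inner dimensions do not match (W.shape[1] != A.shape[0]), numpy raises the familiar "shapes not aligned" ValueError, which is exactly the kind of dimension error discussed in this thread.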


Week 4, Exercise 5. I am getting different dimensions.

Share the full error, please.

Hint: For the last layer, we use L, not l.

The dimension of A still remains (5,4)

Share your full error, please.


For the last layer, we use the activation of that layer, not the previous layer, right? Also note that the prewritten code for the last layer is:

AL, cache = . . .
caches . . .

Don’t change the left-hand sides…

OK, I printed A and I got shape (5, 4). Now it means I will get a dimensional error, since for the first layer A = X, right?

Yes. For the first layer, A = X; for hidden layers, A = A_prev. What is it for the last layer?

I think for the last layer it's A = A(L-1).

No. AL is the output of the last layer, not the input to the last layer.

In the for loop, we set the output of each hidden layer as A, right? So, what is the output of the last hidden layer (the second-to-last layer)? That is the input to the last layer.

A(L-1) is the input of the last layer


Consider the below example:

for i in range(1, 100):
    V = Something

In each iteration, whether i = 1 or any other value, the result is stored in V. After the loop ends, V still holds the value from the last iteration.
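A runnable version of the same idea (the computation assigned to V here is just a placeholder): after the loop finishes, the variable keeps the value from the final iteration — which is why A ends up holding the output of the last hidden layer.

```python
# After a for loop, variables assigned in the loop body keep
# the values from the last iteration.
for i in range(1, 100):
    V = i * 10  # placeholder for "Something" computed each iteration

# The loop has ended, but V still holds the value from i = 99.
print(V)  # 990
```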


OK, for the last layer we use the activation of that layer, which is AL, right?