Week 4, Exercise 5 - L_model_forward

Hi! I don't know which parameters I should pass in the forward function. For the ReLU call, it's supposed to return A and cache, taking A_prev, t_W, t_b, and activation, right? Or am I wrong? And for the sigmoid call, which returns AL and cache, the parameter A is supposed to be the A obtained from the ReLU in the last iteration (2). But I am still getting this error; if someone can give me some help I'd be so grateful:

ValueError: shapes (1,3) and (5,4) not aligned: 3 (dim 1) != 5 (dim 0)

I'm so lost. I just don't know how to pass the parameters (W1, W2, b1, b2) to the function, or to the sigmoid call either. Please help, I'm stuck.

Hi @paolaruedad, let me see if I can help you and give you a hint. Your reasoning about the parameters in your first post seems to be all right, so you are on the right track, no worries.

My guess is that you are on Exercise 5 of the first assignment of Week 4? Are you able to share a snippet of your code and the output, so I can give you a hint?

Regards, Stephanus

Thank you for answering me. I have already fixed the error, thank you so much. I wasn't passing the parameters right; once I realized it, the error went away.

I had the same problem as you. Can you share your solution? Thanks.

I am also getting the shape error; I am really confused.

<ipython-input-28-10fc901e800a> in <module>
      1 t_X, t_parameters = L_model_forward_test_case_2hidden()
----> 2 t_AL, t_caches = L_model_forward(t_X, t_parameters)
      3 
      4 print("AL = " + str(t_AL))
      5 

<ipython-input-27-ec161e1a25d7> in L_model_forward(X, parameters)
     37     # caches ...
     38     # YOUR CODE STARTS HERE
---> 39     AL, cache = linear_activation_forward(A, parameters['W' + str(l)], parameters['b' + str(l)], activation = 'sigmoid')
     40     caches.append(cache)
     41 

<ipython-input-25-86db2fd9a9de> in linear_activation_forward(A_prev, W, b, activation)
     22         # A, activation_cache = ...
     23         # YOUR CODE STARTS HERE
---> 24         Z, linear_cache = linear_forward(A_prev, W, b)
     25         A, activation_cache = sigmoid(Z)
     26 

<ipython-input-23-f8c2ae416f66> in linear_forward(A, W, b)
     18     # Z = ...
     19     # YOUR CODE STARTS HERE
---> 20     Z = np.dot(W, A) + b
     21 
     22     # YOUR CODE ENDS HERE

<__array_function__ internals> in dot(*args, **kwargs)

ValueError: shapes (3,4) and (3,4) not aligned: 4 (dim 1) != 3 (dim 0)

The error is pointing you in the right direction: the shapes of W and A are both (3, 4). Now remember the matrix multiplication rule (see below, from Wikipedia):

[image: matrix multiplication rule from Wikipedia — an (m × n) matrix times an (n × p) matrix gives an (m × p) matrix]

Can you multiply them as they are?
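Here is a tiny, stand-alone NumPy sketch of that rule, with the shapes taken from your traceback (the variable names are just for illustration):

```python
import numpy as np

W = np.zeros((3, 4))   # (3, 4), as in the traceback above
A = np.zeros((3, 4))   # (3, 4) as well

try:
    Z = np.dot(W, A)           # inner dimensions 4 and 3 do not match
except ValueError as err:
    print(err)                 # shapes (3,4) and (3,4) not aligned: 4 (dim 1) != 3 (dim 0)

W_ok = np.zeros((1, 3))        # 1 output unit, 3 inputs from the previous layer
Z = np.dot(W_ok, A)            # inner dimensions match: 3 == 3
print(Z.shape)                 # (1, 4)
```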

I think that it’s the A matrix that’s not in the right dimension. The A matrix’s dimension should be (4, 1) instead of (3, 4) for Z to be (4, 1).

Oh, I can't just assume that A is the one in the wrong dimension. So I have to check W to make sure that it has the correct shape to be multiplied with A.
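If it helps, one way to see which operand is off is to print both shapes right before the dot product. A minimal, purely illustrative helper (check_linear_shapes is a made-up name, not part of the assignment):

```python
import numpy as np

def check_linear_shapes(W, A):
    """Print both shapes and flag a mismatch before calling np.dot."""
    print("W shape:", W.shape, "| A shape:", A.shape)
    if W.shape[1] != A.shape[0]:
        print("Mismatch: W's columns must equal A's rows -> check which "
              "parameter you are passing for this layer.")

# Shapes from the traceback above: both (3, 4), so the check fires.
check_linear_shapes(np.zeros((3, 4)), np.zeros((3, 4)))
# A correctly shaped output-layer W for a 3-unit previous layer would be (1, 3):
check_linear_shapes(np.zeros((1, 3)), np.zeros((3, 4)))
```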

Ok it’s all good now!

Good to see you solved it!

Hello Sir,

I don't know whether I have solved this part correctly.

{moderator edit - solution code removed}

Please help me out here. I don't know which code to write in order to implement LINEAR → SIGMOID.

Thank you for your help.


The second call to linear_activation_forward should be similar to the first one; remember how it is defined:

def linear_activation_forward(A_prev, W, b, activation):
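For example, a single hidden-layer call followed by the output-layer call would look roughly like this — a schematic fragment only, assuming X, W1, b1, W2, b2 are already defined and linear_activation_forward is your own implementation (this is not the assignment's loop code):

```python
# Hidden layer: LINEAR -> RELU
A1, cache1 = linear_activation_forward(X, W1, b1, activation="relu")

# Output layer: LINEAR -> SIGMOID, fed the activations produced by the previous layer
AL, cache2 = linear_activation_forward(A1, W2, b2, activation="sigmoid")
```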

Got it.
Thank you for your help!

I got the same problem. I am writing the second call (AL, cache) the same as the first one but passing 'sigmoid' instead of 'relu', yet it keeps giving me a dimension error. Can you please help me solve it? I have been trying since yesterday.

Hi @Mohammad_Ali_Chamass, yes, the parameters would be almost the same as in the (A, cache) call above, except for A. What you are passing for A needs to be checked. Additionally, you must call it with the activation 'sigmoid', which it seems you have already done. Thanks!

Hello @Rashmi, I tried several ways of passing A but I haven't found a solution yet. The 'A' calculated above by the ReLU loop should be the output that gets passed into the 'AL, cache' call, if I am not mistaken, so I am calling AL, cache = linear_activation_forward(A, ...) with the same parameters as the A, cache call.

Hi, yeah, that seems correct. Maybe you haven't re-run the earlier cells, and they are still using previously entered variables. You need to run all the cells above before running the current cell again. Hope you reach your destination then. Thanks!

The thing now is that I am getting the same output as the expected one, but the test is telling me this: AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]
Error: Wrong shape for variable 0.
Error: Wrong shape for variable 1.
Error: Wrong shape for variable 2.
Error: Wrong shape for variable 1.
Error: Wrong output for variable 0.
Error: Wrong output for variable 1.
Error: Wrong output for variable 2.
Error: Wrong output for variable 1.

The parameters I am passing in the A, cache call are like this: parameters['W' + str(l)], and the same for b, along with the activation function "relu". In the AL, cache call I pass them the same way, except that I go to l + 1, since I want to access the last layer. Is that correct?

Hi,

You are doing forward propagation to obtain the output. Check the parameters carefully in the calls for A, cache and AL, cache, keeping in mind that they use different activations and different layer indices: the loop layers (l) versus the last layer (L). I think you are not passing the right indices somewhere, and that's why you are getting this problem.
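As a shape-only illustration (not the assignment code), here is how the keys in the parameters dictionary line up with the layer indices; the toy shapes below mirror the 2-hidden-layer test case:

```python
import numpy as np

# Toy parameters for a 3-layer network (two hidden layers + output layer).
parameters = {
    "W1": np.zeros((4, 5)), "b1": np.zeros((4, 1)),
    "W2": np.zeros((3, 4)), "b2": np.zeros((3, 1)),
    "W3": np.zeros((1, 3)), "b3": np.zeros((1, 1)),
}

L = len(parameters) // 2          # number of layers in the network: 3
print("L =", L)

# Hidden layers use indices 1 .. L-1, the output layer uses index L.
for l in range(1, L):
    print("hidden layer", l, "-> W" + str(l), parameters["W" + str(l)].shape)
print("output layer   -> W" + str(L), parameters["W" + str(L)].shape)
```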

Go through the public.utils.py file by clicking File -> Open and you will get an idea then.

Thanks!