Problem with L_model_forward

Hello, I have a problem with L_model_forward:
TypeError Traceback (most recent call last)
in
1 t_X, t_parameters = L_model_forward_test_case_2hidden()
----> 2 t_AL, t_caches = L_model_forward(t_X, t_parameters)
3
4 print("AL = " + str(t_AL))
5

in L_model_forward(X, parameters)
27 # caches …
28 # YOUR CODE STARTS HERE
---> 29 A, cache =linear_activation_forward(A_prev, parameters, parameters, activation = "relu")
30 caches.append(cache)
31

in linear_activation_forward(A_prev, W, b, activation)
31 # A, activation_cache = …
32 # YOUR CODE STARTS HERE
---> 33 Z, linear_cache =linear_forward(A_prev, W, b)
34 A, activation_cache = relu(Z)
35 # YOUR CODE ENDS HERE

in linear_forward(A, W, b)
19 # YOUR CODE STARTS HERE
20
---> 21 Z =np.dot(W,A)+b
22
23 # YOUR CODE ENDS HERE

<__array_function__ internals> in dot(*args, **kwargs)

TypeError: unsupported operand type(s) for *: 'dict' and 'float'
Please tell me what I need to do.
Thanks for your time.

Hello @plohih_love,

Let's analyze the error traceback :slight_smile:! It can give us a lot of ideas about what to check.

Let's start from the bottom up.

[screenshot: the last line of the traceback, showing the TypeError]

This says we are multiplying a dict by a float, which is a problem because a dict is not a number. As soon as we realize this, we might already have some ideas about what happened, right? We do use a dict to store some numbers, right?
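For illustration only, here is a tiny standalone snippet (not from the assignment) that reproduces the same kind of TypeError, since Python does not define multiplication between a dict and a number:

d = {"W1": 0.5, "b1": 0.1}   # a dict that stores some numbers
d * 2.0                      # raises TypeError: unsupported operand type(s) for *: 'dict' and 'float'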

If you still have no idea where the problem is, then let's look at the next one:

[screenshot: the traceback frame inside linear_forward, showing Z = np.dot(W, A) + b]

This says the erroneous multiplication happens inside np.dot. In particular, the error message says "dict and float", and this ordering has meaning! We did np.dot(W, A), so W is a dict and A is a float. Our problem is in W!

linear_activation_forward is shown twice. The first one is where we called it. The second one is how we defined it! From the definition, the second parameter is our W, and look at what we passed: the parameters! Is that the dict object we are looking for?

Should we have passed parameters as W? And if you look at what we passed as b, it is also parameters! parameters is a dict, and we need to get the W and the b that are stored inside parameters and pass them instead!

I suggest you study your code again and see how you can get W and b out of parameters and pass them properly into linear_activation_forward.
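As a hint only, a minimal sketch (not your exact code; W_l and b_l are hypothetical names, and the 'W' + str(l) / 'b' + str(l) keys are the ones parameters is built with) of what extracting the layer-l values might look like:

W_l = parameters['W' + str(l)]   # weight matrix for layer l (a numpy array)
b_l = parameters['b' + str(l)]   # bias vector for layer l (a numpy array)
A, cache = linear_activation_forward(A_prev, W_l, b_l, activation="relu")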

Best of luck!

Raymond


Thanks for your answer.
Now I pass the parameters as:
for l in range(1, L):
    A_prev = A
    W = parameters['W' + str(l)]
    b = parameters['b' + str(l)]
Now I get this error:
AL = [[0.90589057 0.75020632 0.05200643 0.82351754]
[0.99823392 0.08462048 0.01610661 0.98885794]
[0.9999688 0.33941221 0.83703792 0.99971951]]
Error: Wrong shape for variable 0.
Error: Wrong shape for variable 0.
Error: Wrong shape for variable 1.
Error: Wrong shape for variable 2.
Error: Wrong shape for variable 1.
Error: Wrong output for variable 0.
Error: Wrong output for variable 0.
Error: Wrong output for variable 1.
Error: Wrong output for variable 2.
Error: Wrong output for variable 1.
1 Tests passed
2 Tests failed

AssertionError Traceback (most recent call last)
in
4 print("AL = " + str(t_AL))
5
----> 6 L_model_forward_test(L_model_forward)

~/work/release/W4A1/public_tests.py in L_model_forward_test(target)
238 ]
239
--> 240 multiple_test(test_cases, target)
241 ''' {
242 "name":"datatype_check",

~/work/release/W4A1/test_utils.py in multiple_test(test_cases, target)
140 print('\033[92m', success," Tests passed")
141 print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
143

AssertionError: Not all tests were passed for L_model_forward. Check your equations and avoid using global variables inside the function.

Hello, Plohih.

Look at what Mentor Raymond suggested to you. The cell is failing the test cases now. The assertion error suggests that your equations may be wrong or that some kind of global variable is being used within the function. Read your code again and try to identify the error.

141 print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function."
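As a side note on what "global variables inside the function" means here, a minimal sketch (with hypothetical names, not your actual code) of the kind of mistake that warning is about:

t_parameters = {'W1': 0.5}        # defined in the notebook, outside the function

def forward(X, parameters):
    # Bug: this uses the global t_parameters instead of the parameters argument,
    # so the function silently ignores whatever the test passes in.
    return X * t_parameters['W1']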

Yes, notice in particular that the shape of your AL value is wrong. It is 3 x 4. But it should be 1 x m, where m is the number of samples, right?

Here's a thread which takes you through the dimensional analysis for the "2hidden" test case.
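If it helps while debugging, here is a quick sanity check you can add temporarily before returning from the function (a sketch, assuming X has shape (n_x, m) as in the test case):

# AL should have one row (a single output unit) and one column per sample
assert AL.shape == (1, X.shape[1]), f"AL has shape {AL.shape}, expected (1, {X.shape[1]})"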


Thank you for your reply. I figured out why AL has a size of 3x4. The loop does not calculate the second hidden layer, but immediately proceeds to the calculation of AL with the parameters of the second hidden layer:

A = [[-0.31178367 0.72900392 0.21782079 -0.8990918 ]
[-2.48678065 0.91325152 1.12706373 -1.51409323]
[ 1.63929108 -0.4298936 2.63128056 0.60182225]
[-0.33588161 1.23773784 0.11112817 0.12915125]
[ 0.07612761 -0.15512816 0.63422534 0.810655 ]]
A = [[0. 3.18040136 0.4074501 0. ]
[0. 0. 3.18141623 0. ]
[4.18500916 0. 0. 2.72141638]
[5.05850802 0. 0. 3.82321852]]
AL = [[0.90589057 0.75020632 0.05200643 0.82351754]
[0.99823392 0.08462048 0.01610661 0.98885794]
[0.9999688 0.33941221 0.83703792 0.99971951]]
A = [[-0.31178367 0.72900392 0.21782079 -0.8990918 ]
[-2.48678065 0.91325152 1.12706373 -1.51409323]
[ 1.63929108 -0.4298936 2.63128056 0.60182225]
[-0.33588161 1.23773784 0.11112817 0.12915125]
[ 0.07612761 -0.15512816 0.63422534 0.810655 ]]
A = [[0. 3.18040136 0.4074501 0. ]
[0. 0. 3.18141623 0. ]
[4.18500916 0. 0. 2.72141638]
[5.05850802 0. 0. 3.82321852]]
A = [[-0.31178367 0.72900392 0.21782079 -0.8990918 ]
[-2.48678065 0.91325152 1.12706373 -1.51409323]
[ 1.63929108 -0.4298936 2.63128056 0.60182225]
[-0.33588161 1.23773784 0.11112817 0.12915125]
[ 0.07612761 -0.15512816 0.63422534 0.810655 ]]
A = [[0. 3.18040136 0.4074501 0. ]
[0. 0. 3.18141623 0. ]
[4.18500916 0. 0. 2.72141638]
[5.05850802 0. 0. 3.82321852]]
Error: Wrong shape for variable 0.
Error: Wrong shape for variable 0.
Error: Wrong shape for variable 1.
Error: Wrong shape for variable 2.
Error: Wrong shape for variable 1.
A = [[-0.31178367 0.72900392 0.21782079 -0.8990918 ]
[-2.48678065 0.91325152 1.12706373 -1.51409323]
[ 1.63929108 -0.4298936 2.63128056 0.60182225]
[-0.33588161 1.23773784 0.11112817 0.12915125]
[ 0.07612761 -0.15512816 0.63422534 0.810655 ]]
A = [[0. 3.18040136 0.4074501 0. ]
[0. 0. 3.18141623 0. ]
[4.18500916 0. 0. 2.72141638]
[5.05850802 0. 0. 3.82321852]]
Error: Wrong output for variable 0.
Error: Wrong output for variable 0.
Error: Wrong output for variable 1.
Error: Wrong output for variable 2.
Error: Wrong output for variable 1.
1 Tests passed
2 Tests failed

I set the loop iteration over the parameters as:
for l in range(1, L):
    A_prev = A
    W = parameters['W' + str(l)]
    b = parameters['b' + str(l)]

The idea for the loop logic looks correct, but that’s not all you have to do, right? After the loop you have to handle the output layer separately, because it uses a different activation function.

Run this code and watch what happens:

for ii in range(1,5):
    print(f"ii = {ii}")

print(f"After loop ii = {ii}")

Where can I continue looking for errors?

Thank you for your reply.
Using your print(f"After loop ii = {ii}") idea in my code, the output looks like:
A = [[ 2.2644603 1.09971298 0. 1.54036335]
[ 6.33722569 0. 0. 4.48582383]
[10.37508342 0. 1.63635185 8.17870169]]
A is size 3x4.
The error was further down, in the AL computation, where I wrote A_prev instead of A.
Thanks for your help!
