W4_A1_Ex5: AssertionError in L_model_forward test

I have a problem with:
AL, cache = ...

I tried a lot of things, and none of them worked. I'm starting to think I may have used the wrong function or approach.

The error I get:

```
AL = [[0.20404114 0.88844537 0.998149  ]
 [0.03048766 0.49417864 0.98510808]
 [0.08821153 0.75035646 0.99511035]]
Error: Wrong shape for variable 0.
Error: Wrong shape for variable 0.
Error: Wrong shape for variable 1.
Error: Wrong shape for variable 2.
Error: Wrong shape for variable 1.
Error: Wrong output for variable 0.
Error: Wrong output for variable 0.
Error: Wrong output for variable 1.
Error: Wrong output for variable 2.
Error: Wrong output for variable 1.
 1  Tests passed
 2  Tests failed
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-71-10fc901e800a> in <module>
      4 print("AL = " + str(t_AL))
      5 
----> 6 L_model_forward_test(L_model_forward)

~/work/release/W4A1/public_tests.py in L_model_forward_test(target)
    315     ]
    316 
--> 317     multiple_test(test_cases, target)
    318 '''        {
    319             "name":"datatype_check",

~/work/release/W4A1/test_utils.py in multiple_test(test_cases, target)
    140         print('\033[92m', success," Tests passed")
    141         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142         raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
    143 

AssertionError: Not all tests were passed for L_model_forward. Check your equations and avoid using global variables inside the function.
```

The shape of your AL value is 3 x 3, but that is not correct. The way to debug this is to start with dimensional analysis. Here's a thread that walks you through how to do that. Once you see the shapes that should be produced at each layer, compare them with what you are actually getting and figure out what your code is doing wrong.
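A quick way to run that dimensional analysis yourself is to print the shapes layer by layer. Here is a minimal sketch, not the assignment's code: the layer sizes below are made up for illustration, and a single stand-in activation is used because only the shapes matter here.

```python
import numpy as np

# Hypothetical layer sizes for illustration: 4 input features,
# hidden layers of 5 and 3 units, and a single output unit.
layer_dims = [4, 5, 3, 1]
m = 6  # number of examples

A = np.random.randn(layer_dims[0], m)  # input X has shape (n_x, m)
for l in range(1, len(layer_dims)):
    W = np.random.randn(layer_dims[l], layer_dims[l - 1])  # (n_l, n_{l-1})
    b = np.zeros((layer_dims[l], 1))
    Z = W @ A + b             # shape: (layer_dims[l], m)
    A = 1 / (1 + np.exp(-Z))  # stand-in activation; shapes are what matter
    print(f"layer {l}: Z and A have shape {A.shape}")

# For binary classification the final AL must be (1, m): one row of
# probabilities. A 3 x 3 AL means the loop picked the wrong W/b for
# a layer or iterated over the wrong number of layers.
print("AL shape:", A.shape)
```

If a layer prints an unexpected shape, that layer is where the wrong `W`, `b`, or `A_prev` is being fed in.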

I solved it and reached Exercise 9, where I got stuck again.

My error seems to be in this line:
A_prev_temp, dW_temp, db_temp = linear_activation_backward(dAL, current_cache, "relu")

The error:

```
dA0 = [[ 0.57243624 0. ]
[-0.62840214 0. ]
[ 0.07375161 0. ]]
dA1 = [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]]
dW1 = [[-0.55240952 0.17511096 0.6762397 ]]
dW2 = [[-0.39202432 -0.13325855 -0.04601089]]
db1 = [[-0.2795438]]
db2 = [[0.15187861]]
Error: Wrong shape for variable dA0.
Error: Wrong shape for variable dW1.
Error: Wrong shape for variable db1.
Error: Wrong output for variable dA0.
Error: Wrong output for variable dW1.
Error: Wrong output for variable db1.
1 Tests passed
2 Tests failed

AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>
      9 print("db2 = " + str(grads['db2']))
     10 
---> 11 L_model_backward_test(L_model_backward)

~/work/release/W4A1/public_tests.py in L_model_backward_test(target)
    519     ]
    520 
--> 521     multiple_test(test_cases, target)
    522 
    523 def update_parameters_test(target):

~/work/release/W4A1/test_utils.py in multiple_test(test_cases, target)
    140         print('\033[92m', success," Tests passed")
    141         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142         raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
    143 

AssertionError: Not all tests were passed for L_model_backward. Check your equations and avoid using global variables inside the function.
```

For the hidden layers (in the loop), you have to use relu, and for the last layer you have to use sigmoid. Note that dAL should only be fed into the last (sigmoid) layer; inside the loop you pass in the dA produced by the previous iteration. Also, you have the wrong shapes for the parameters.
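The structure that reply describes can be sketched as follows. This is a simplified skeleton, not the assignment's exact code: `linear_activation_backward` is replaced by a shape-only stub (it ignores the activation's derivative) so the loop structure and gradient shapes can be checked in isolation.

```python
import numpy as np

def linear_activation_backward(dA, cache, activation):
    # Shape-only stub. In the assignment this applies the activation's
    # derivative to get dZ; here dZ = dA so only the shapes propagate.
    A_prev, W, b = cache
    dZ = dA
    m = A_prev.shape[1]
    dW = dZ @ A_prev.T / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db

def L_model_backward(AL, Y, caches):
    grads = {}
    L = len(caches)
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Last layer: sigmoid -- the ONLY place dAL is used directly.
    current_cache = caches[L - 1]
    dA_prev, dW, db = linear_activation_backward(dAL, current_cache, "sigmoid")
    grads["dA" + str(L - 1)] = dA_prev
    grads["dW" + str(L)] = dW
    grads["db" + str(L)] = db

    # Hidden layers: relu, fed by the previous iteration's dA.
    # Reusing dAL here produces wrong-shape errors like the ones above.
    for l in reversed(range(L - 1)):
        current_cache = caches[l]
        dA_prev, dW, db = linear_activation_backward(
            grads["dA" + str(l + 1)], current_cache, "relu")
        grads["dA" + str(l)] = dA_prev
        grads["dW" + str(l + 1)] = dW
        grads["db" + str(l + 1)] = db
    return grads

# Tiny usage example: a 4 -> 3 -> 1 network with m = 2 examples.
rng = np.random.default_rng(0)
A0 = rng.standard_normal((4, 2))
W1, b1 = rng.standard_normal((3, 4)), np.zeros((3, 1))
W2, b2 = rng.standard_normal((1, 3)), np.zeros((1, 1))
A1 = np.maximum(0, W1 @ A0 + b1)
AL = 1 / (1 + np.exp(-(W2 @ A1 + b2)))
caches = [(A0, W1, b1), (A1, W2, b2)]
grads = L_model_backward(AL, np.array([[1, 0]]), caches)
print({k: v.shape for k, v in grads.items()})
```

With this structure, `dA0` comes out as (4, 2), `dW1` as (3, 4), and `db1` as (3, 1), matching the parameter shapes of each layer.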

Thanks, I was able to solve it.