Course-1, Week-4, Exercise-9 L_model_backward

I am getting the following assertion error in my code.
----------------------------------ERROR BEGIN----------------------------------------
dA0 = [[ 0. 2.15735098]
[ 0. -1.34961884]
[ 0. -1.32395514]
[ 0. -3.05819414]]
dA1 = [[ 0.57243624 -1.81702513]
[-0.62840214 1.99467189]
[ 0.07375161 -0.23410211]]
dW1 = [[1.69300653 0.32230299 0.56963799 0.43355858]
[0. 0. 0. 0. ]
[0.21812379 0.04152491 0.07339109 0.05585887]]
dW2 = [[-1.65635369 -0.53809236 -0.14346586]]
db1 = [[-0.90851256]
[ 0. ]
[-0.11705105]]
db2 = [[0.60778316]]
Error: Wrong output for variable dA1.
Error: Wrong output for variable dW2.
Error: Wrong output for variable db2.
Error: Wrong output for variable dA0.
Error: Wrong output for variable dW1.
Error: Wrong output for variable db1.
2 Tests passed
1 Tests failed

AssertionError Traceback (most recent call last)
in
9 print("db2 = " + str(grads[‘db2’]))
10
—> 11 L_model_backward_test(L_model_backward)

~/work/release/W4A1/public_tests.py in L_model_backward_test(target)
442 ]
443
--> 444 multiple_test(test_cases, target)
445
446 def update_parameters_test(target):

~/work/release/W4A1/test_utils.py in multiple_test(test_cases, target)
140 print('\033[92m', success," Tests passed")
141 print('\033[91m', len(test_cases) - success, " Tests failed")
--> 142 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
143

AssertionError: Not all tests were passed for L_model_backward. Check your equations and avoid using global variables inside the function.

Expected output:

dA0 = [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]]
dA1 = [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]]
dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167]
[0. 0. 0. 0. ]
[0.05283652 0.01005865 0.01777766 0.0135308 ]]
dW2 = [[-0.39202432 -0.13325855 -0.04601089]]
db1 = [[-0.22007063]
[ 0. ]
[-0.02835349]]
db2 = [[0.15187861]]

------------------------------------ERROR END-------------------------------------
Here’s my understanding of this section of the code:

  1. We compute the derivative dAL for the last layer and use it in the linear_backward function to compute dWL, dbL, and dA_prev (which is dA for layer L-1).
  2. Then this dA(L-1) is fed to the linear_activation_backward function to compute dW(L-1), db(L-1), and dA(L-2), and so on…

Can anyone help me figure out the issue that I am facing?

Hi, Dipesh,

Welcome to the community.

Yes, you are right. But you have to keep in mind that you must use the correct activation at each step to get the right output. The backward pass goes through the activations in reverse order: it starts with the one you applied last in the forward pass (sigmoid, in the output layer) and ends with the one you applied first (ReLU, in the hidden layers).
The forward pass ends by evaluating the cost (the loss function), and the backward pass propagates the derivative of that cost back through the layers one by one. Each backward step must consume exactly the cache its forward counterpart produced, so the shapes have to line up: dW matches W, db matches b, and dA_prev matches the activation that was fed into that layer. A sketch of this structure follows below.
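Here is a minimal sketch of that structure, assuming numpy and the notebook's linear_activation_backward helper are in scope (the grads key names follow the convention visible in your output, e.g. dA1, dW2, db2). It is meant only to illustrate the ordering of the activations, not as a drop-in answer:

```python
import numpy as np

def L_model_backward(AL, Y, caches):
    # Backward pass for a [LINEAR->RELU] * (L-1) -> LINEAR->SIGMOID network (sketch).
    grads = {}
    L = len(caches)                  # number of layers
    Y = Y.reshape(AL.shape)

    # Derivative of the cross-entropy cost with respect to the final activation AL
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Layer L: sigmoid was applied last in the forward pass, so it is undone first here
    dA_prev, dW, db = linear_activation_backward(dAL, caches[L - 1], activation="sigmoid")
    grads["dA" + str(L - 1)] = dA_prev
    grads["dW" + str(L)] = dW
    grads["db" + str(L)] = db

    # Layers L-1 ... 1: these used ReLU in the forward pass
    for l in reversed(range(L - 1)):
        dA_prev, dW, db = linear_activation_backward(
            grads["dA" + str(l + 1)], caches[l], activation="relu")
        grads["dA" + str(l)] = dA_prev
        grads["dW" + str(l + 1)] = dW
        grads["db" + str(l + 1)] = db

    return grads
```

If the activations are swapped (relu where sigmoid should be, or vice versa), every gradient downstream of that step comes out wrong, which matches the pattern in your output where all six values differ from the expected ones.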
Avoid using any kind of global variables inside the function, as they will not give you the right output; the assertion error mentions this explicitly. Also note where the failure is reported: line 141, print('\033[91m', len(test_cases) - success, " Tests failed"), only counts how many of the test cases failed.
So you need to look at the test cases that are applied when your function is called and compare your outputs against the expected values they provide. A small illustration of the global-variable pitfall follows below.
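On the global-variable point, here is a hypothetical illustration (the names y_global, loss_gradient_bad, and loss_gradient_good are made up for this example, not from the assignment). If a function reads a notebook-level variable instead of its own arguments, the values the test harness passes in never reach it, so the outputs are wrong even when the math is right:

```python
import numpy as np

y_global = np.array([[1.0, 0.0]])   # defined in an earlier notebook cell

def loss_gradient_bad(AL, Y):
    # Bug: reads the notebook-level y_global instead of the Y argument,
    # so it ignores whatever the test harness passes in.
    return -(np.divide(y_global, AL) - np.divide(1 - y_global, 1 - AL))

def loss_gradient_good(AL, Y):
    # Uses only its own arguments, so it behaves correctly for any test input.
    return -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

AL = np.array([[0.8, 0.1]])
Y_test = np.array([[0.0, 1.0]])        # a different Y, as a grader would pass
print(loss_gradient_bad(AL, Y_test))   # wrong: still reflects y_global
print(loss_gradient_good(AL, Y_test))  # correct for Y_test
```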