~/work/release/W4A1/test_utils.py in multiple_test(test_cases, target)
140 print('\033[92m', success, " Tests passed")
141 print('\033[91m', len(test_cases) - success, " Tests failed")
→ 142 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.name))
143
AssertionError: Not all tests were passed for L_model_backward. Check your equations and avoid using global variables inside the function.
I went in and took a look at your notebook (thank you for sharing the lab ID beforehand!). Your cache value in the latter part of ex 9 was wrong, as was the line of code that follows it. I have left comments in your notebook.
Only the course staff (e.g. Mubsi) can look at other people’s notebooks. Mubsi is a pretty busy guy, so it’s better not to depend on his superpowers. Why don’t you try showing us the error output you are getting and maybe we can offer advice based on that.
That’s better, since the shapes are now correct, but now it’s complaining about the actual values. Getting the shapes correct is a pretty low bar for success, though. Now you need to take another pass through the code and compare what you wrote to the formulas shown in the instructions. I added some print statements to show the shapes and some of the intermediate values:
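If you want to do that kind of instrumentation yourself, a few print statements over the grads dictionary are enough. Here is a hypothetical snippet to show the idea; the dummy shapes below are made up, and in the notebook you would loop over the real grads dictionary returned by your function:

```python
import numpy as np

# Dummy gradients dict just to make the snippet runnable; in the notebook
# you would use the real `grads` returned by L_model_backward instead.
np.random.seed(1)
grads = {
    "dW1": np.random.randn(3, 4),
    "db1": np.random.randn(3, 1),
    "dW2": np.random.randn(1, 3),
    "db2": np.random.randn(1, 1),
}

# Print the shape and the first couple of entries of each gradient,
# so you can compare both shapes and values against the expected output.
for name in sorted(grads):
    g = grads[name]
    print(f"{name}: shape {g.shape}, first values {g.ravel()[:2]}")
```

Comparing both the shapes and the leading values against the expected output usually narrows down which layer's computation went wrong.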
Notice that all the values you get are quite a bit different from what I show there. But interestingly, some of your values, like dA1, dW2 and db2 in your first post with the wrong shapes, actually agree with what I show. Hmmmm. Of course everything happens backwards here: we start with the output layer (layer 2 in this test case). So it looks like that step was correct in your first post, but things went off the rails for layer 1 and layer 0. That should be a clue as to where to look for the issues. In your second post, though, all the values are different.
The other high-level point is that we are assuming your earlier functions like linear_backward and linear_activation_backward passed their test cases. So whatever the problem is, it must be in your L_model_backward logic. A perfectly correct subroutine can still give a wrong answer if you pass it bad data, right? So thinking clearly about where the problem can be saves you from wasting effort looking in the wrong places.
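For what it's worth, here is a minimal sketch of how that backward loop usually threads the caches. The helper names and cache layout below (relu_backward, sigmoid_backward, linear_backward, and caches stored as ((A_prev, W, b), Z)) are assumptions and may not match your notebook's signatures exactly; this is not the graded solution, just an illustration of the indexing, where the output layer uses caches[L-1] and each hidden layer l uses caches[l]:

```python
import numpy as np

def relu_backward(dA, Z):
    # Gradient through relu: pass dA where Z > 0, zero elsewhere.
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward(dA, Z):
    # Gradient through sigmoid: dA * s * (1 - s).
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

def linear_backward(dZ, linear_cache):
    # Standard linear-layer gradients, averaged over the m examples.
    A_prev, W, b = linear_cache
    m = A_prev.shape[1]
    dW = dZ @ A_prev.T / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ
    return dA_prev, dW, db

def L_model_backward_sketch(AL, Y, caches):
    grads = {}
    L = len(caches)  # number of layers
    # Derivative of cross-entropy cost w.r.t. AL.
    dAL = -(np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    # Output layer (sigmoid): note the cache index is L-1, not L.
    linear_cache, Z = caches[L - 1]
    dZ = sigmoid_backward(dAL, Z)
    grads[f"dA{L-1}"], grads[f"dW{L}"], grads[f"db{L}"] = \
        linear_backward(dZ, linear_cache)

    # Hidden layers (relu), walking backwards from L-1 down to 1.
    for l in reversed(range(L - 1)):
        linear_cache, Z = caches[l]
        dZ = relu_backward(grads[f"dA{l+1}"], Z)
        grads[f"dA{l}"], grads[f"dW{l+1}"], grads[f"db{l+1}"] = \
            linear_backward(dZ, linear_cache)
    return grads
```

The classic bug is exactly the kind of cache mix-up discussed above: using caches[L] or caches[l+1] where caches[L-1] or caches[l] is needed, which feeds a perfectly correct linear_backward the wrong data.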