Week 1 Gradient Checking: gradient_check_n

For the life of me I can’t figure out what I’m doing wrong here. The output tells me that I’m not using np.linalg.norm for the numerator or denominator … but I am! I need to understand what I’m doing wrong, because I’m losing momentum in this course… please help!

There is a mistake in the backward propagation! difference = 2.4577112074727173e-07
Error: Wrong output
 0  Tests passed
 1  Tests failed
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-12-0b57cb811a44> in <module>
      6 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
      7 
----> 8 gradient_check_n_test(gradient_check_n, parameters, gradients, X, Y)

~/work/release/W1A3/public_tests.py in gradient_check_n_test(target, parameters, gradients, X, Y)
     56     ]
     57 
---> 58     single_test(test_cases, target)
     59 
     60 def predict_test(target):

~/work/release/W1A3/test_utils.py in single_test(test_cases, target)
    122         print('\033[92m', success," Tests passed")
    123         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 124         raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
    125 
    126 def multiple_test(test_cases, target):

AssertionError: Not all tests were passed for gradient_check_n. Check your equations and avoid using global variables inside the function.

OMG! I hope someone else doesn’t make the silly mistake I made. However, just in case, I will humble myself and leave this up for anyone else who does. Without sharing code, let’s just say that inside the for loop we calculate grad and gradapprox one entry at a time, for each “i-th” parameter. But outside the for loop, the comparison has to use the whole vectors, not just the i-th values. Gulp! Humble pie eating going on here.
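For anyone who lands here later, here is a tiny self-contained sketch of what the loop part is doing. The cost function, theta, and values below are made up stand-ins (not the graded code or the assignment’s forward_propagation_n helper), just to show the per-entry structure:

```python
import numpy as np

# Toy stand-ins: a made-up cost J(theta) = sum(theta**2) and its analytic gradient.
# In the assignment these come from forward/backward propagation instead.
def cost(theta):
    return np.sum(theta ** 2)

theta = np.array([1.0, -2.0, 0.5])   # plays the role of the flattened parameters vector
grad = 2 * theta                     # analytic gradient of the toy cost
epsilon = 1e-7

# Inside the loop: perturb ONE entry at a time and fill in gradapprox[i].
gradapprox = np.zeros_like(theta)
for i in range(theta.shape[0]):
    theta_plus = np.copy(theta)
    theta_plus[i] += epsilon
    theta_minus = np.copy(theta)
    theta_minus[i] -= epsilon
    gradapprox[i] = (cost(theta_plus) - cost(theta_minus)) / (2 * epsilon)
# Only AFTER the loop do grad and gradapprox get compared as whole vectors.
```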


This helped me out a lot, thanks @seth.


Hi, I am encountering the same problem but I didn’t get it. Can you be more specific?

I hope this isn’t considered cheating, but basically, when you compute the difference you need to use the WHOLE grad and gradapprox vectors instead of just the i-th value!
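In other words, the last few lines operate on the full vectors. Something like this, with placeholder values and assuming the usual grad / gradapprox variable names from the notebook:

```python
import numpy as np

# Placeholder vectors standing in for the full grad and gradapprox (illustrative values only).
grad = np.array([[0.1], [0.2], [0.3]])
gradapprox = np.array([[0.1000001], [0.1999998], [0.3000002]])

numerator = np.linalg.norm(grad - gradapprox)                    # ||grad - gradapprox||_2
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # ||grad||_2 + ||gradapprox||_2
difference = numerator / denominator                             # a plain float, so the ndarray assert passes
print(difference)
```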


If you are getting an “Improper number of dimensions to norm” error when computing the difference, try feeding arrays to np.linalg.norm (wrap the value in brackets). This worked for me in the previous exercise. In this one, I had to remove the brackets because the values were already arrays.
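To illustrate the bracket trick with made-up scalar values: in the 1-D exercise grad and gradapprox can be plain Python floats, and on some NumPy versions np.linalg.norm on a bare scalar complains about the number of dimensions, so the brackets turn each value into a 1-element array first:

```python
import numpy as np

grad, gradapprox = 2.0, 2.0000001   # placeholder scalars, as in the 1-D exercise

# Brackets make each scalar a 1-element list, which np.linalg.norm accepts.
numerator = np.linalg.norm([grad - gradapprox])
denominator = np.linalg.norm([grad]) + np.linalg.norm([gradapprox])
difference = numerator / denominator
print(difference)
```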

You are a lifesaver!

There was no way I could have figured this out on my own. Thank you!