Gradient_check_n_test partial failure. Not all tests were passed for gradient_check_n

So I made sure the code perturbs the proper theta[i] by plus and minus epsilon, and I believe the parentheses in the centered-difference quotient are correct (a generic sketch of that step follows below).

[Removed solution code]
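To keep the context without the removed code, here is a generic sketch of the centered-difference step I mean. This is *not* the assignment solution: the cost function `J` and the toy `theta` below are purely illustrative stand-ins.

```python
import numpy as np

def two_sided_gradapprox(theta, J, epsilon=1e-7):
    """Approximate dJ/dtheta[i] with a centered difference for each i."""
    gradapprox = np.zeros_like(theta, dtype=float)
    for i in range(theta.shape[0]):
        theta_plus = np.copy(theta)
        theta_plus[i] += epsilon           # J(theta[i] + epsilon)
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon          # J(theta[i] - epsilon)
        gradapprox[i] = (J(theta_plus) - J(theta_minus)) / (2 * epsilon)
    return gradapprox

# Sanity check on a toy cost J(theta) = sum(theta**2), whose gradient is 2*theta
theta = np.array([1.0, -2.0, 3.0])
print(two_sided_gradapprox(theta, lambda t: np.sum(t ** 2)))  # ~[2., -4., 6.]
```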

The results are a bit confusing. By the way, I corrected the built-in errors in backprop_n.

```
Your backward propagation works perfectly fine! difference = 6.421267143641835e-08
Error: Wrong output
 0  Tests passed
 1  Tests failed
```


```
AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>
      6 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
      7 
----> 8 gradient_check_n_test(gradient_check_n, parameters, gradients, X, Y)

~/work/release/W1A3/public_tests.py in gradient_check_n_test(target, parameters, gradients, X, Y)
     56     ]
     57 
---> 58     single_test(test_cases, target)
     59 
     60 def predict_test(target):

~/work/release/W1A3/test_utils.py in single_test(test_cases, target)
    122     print('\033[92m', success, " Tests passed")
    123     print('\033[91m', len(test_cases) - success, " Tests failed")
--> 124     raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
    125 
    126 def multiple_test(test_cases, target):

AssertionError: Not all tests were passed for gradient_check_n. Check your equations and avoid using global variables inside the function.
```

I've spent too long debugging this. Any help is appreciated.

Hi, @Gian.

Think about the shapes of grad and gradapprox. What are you computing if you index into them? :slight_smile:
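To make the hint concrete, here is a minimal sketch of the shape issue (the toy vector size and perturbation below are assumptions, not the assignment's data): after `dictionary_to_vector`, `grad` and `gradapprox` are whole column vectors, so the final comparison must take `np.linalg.norm` of the whole vectors rather than of indexed entries.

```python
import numpy as np

# Toy stand-ins: column vectors of shape (num_parameters, 1),
# as produced by dictionary_to_vector in the assignment.
grad = np.random.randn(47, 1)
gradapprox = grad + 1e-9  # pretend numerical estimate, very close to grad

# Norms over the WHOLE vectors -> numerator and denominator are floats,
# and difference is a single float. Indexing (grad[i], gradapprox[i])
# inside these norms is what produces the "Wrong output" failure.
numerator = np.linalg.norm(grad - gradapprox)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
difference = numerator / denominator
print(difference)  # small float, e.g. < 2e-7 when backprop is correct
```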


It's a conspiracy, I say! LOL. Thanks, Nramon. Sorry about posting the solution code, but I had to give you the full context.


:joy: No worries. Glad I could help.
