Gradient Checking

I have completed all the steps, but I am still getting this error:

There is a mistake in the backward propagation! difference = 1.0
Error: Wrong output
0 Tests passed
1 Tests failed

AssertionError Traceback (most recent call last)
in
6 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
7
----> 8 gradient_check_n_test(gradient_check_n, parameters, gradients, X, Y)

~/work/release/W1A3/public_tests.py in gradient_check_n_test(target, parameters, gradients, X, Y)
56 ]
57
----> 58 single_test(test_cases, target)
59
60 def predict_test(target):

~/work/release/W1A3/test_utils.py in single_test(test_cases, target)
122 print('\033[92m', success, " Tests passed")
123 print('\033[91m', len(test_cases) - success, " Tests failed")
----> 124 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
125
126 def multiple_test(test_cases, target):

AssertionError: Not all tests were passed for gradient_check_n. Check your equations and avoid using global variables inside the function.


I have the same issue


Hi, @Manasvi:

6 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"

There seems to be something wrong in the way you computed either numerator or denominator. Did you use np.linalg.norm? Double-check that first and let me know if you can’t find the problem :slight_smile:

Did you get the exact same error, @lufrocea?


I get this error as well. I've tried rewriting the code and checking for typos, but as far as I can tell it matches the steps described in the task.

I had the same problem and thought it had something to do with my usage of np.linalg.norm, but it turned out I was using the wrong operation in the denominator (- instead of +). You should also check the dimensions of your matrices carefully.


Fixed for @Erlend. Good luck with the rest of the course :slight_smile:

A common mistake I’ve seen is using the same expression to compute theta_plus[i] and theta_minus[i], probably because it was copied and pasted.
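
For illustration only, here is a minimal sketch of how those two lines should differ, using a toy cost function (toy_cost and the variable values below are made up for this example, not the assignment's actual helpers). If both lines use +, theta_plus equals theta_minus and gradapprox[i] comes out as 0:

import numpy as np

def toy_cost(theta):
    # stand-in for the cost the assignment computes via forward propagation
    return np.sum(theta ** 2)

theta = np.array([1.0, 2.0, 3.0])
epsilon = 1e-7
i = 1

theta_plus = np.copy(theta)
theta_plus[i] = theta_plus[i] + epsilon     # component i bumped UP
theta_minus = np.copy(theta)
theta_minus[i] = theta_minus[i] - epsilon   # component i bumped DOWN -- the sign must differ

gradapprox_i = (toy_cost(theta_plus) - toy_cost(theta_minus)) / (2 * epsilon)
print(gradapprox_i)   # roughly 4.0, the derivative of theta[1]**2 at theta[1] = 2.0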


Yes, it is the same error.


AssertionError Traceback (most recent call last)
in
6 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
7
----> 8 gradient_check_n_test(gradient_check_n, parameters, gradients, X, Y)

~/work/release/W1A3/public_tests.py in gradient_check_n_test(target, parameters, gradients, X, Y)
56 ]
57
----> 58 single_test(test_cases, target)
59
60 def predict_test(target):

~/work/release/W1A3/test_utils.py in single_test(test_cases, target)
122 print('\033[92m', success, " Tests passed")
123 print('\033[91m', len(test_cases) - success, " Tests failed")
----> 124 raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
125
126 def multiple_test(test_cases, target):

AssertionError: Not all tests were passed for gradient_check_n. Check your equations and avoid using global variables inside the function.

I don’t have any global variables inside the function.

Hi, @lufrocea.

Check the expression for gradapprox[i] in gradient_check_n and be careful with operator precedence. Hint: ()
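
To make the precedence point concrete, here is a tiny sketch with toy numbers (J_plus_i and J_minus_i are placeholders, not the assignment's arrays):

epsilon = 1e-7
J_plus_i = 2.0 + epsilon    # toy stand-ins for J_plus[i] and J_minus[i]
J_minus_i = 2.0 - epsilon

correct = (J_plus_i - J_minus_i) / (2 * epsilon)   # subtraction happens first: roughly 1.0
wrong = J_plus_i - J_minus_i / (2 * epsilon)       # only J_minus_i gets divided: about -1e7

print(correct, wrong)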


Thanks!!! It worked. I found the error.


I'm getting the same error and none of the other solutions have worked for me. I've double-checked that I'm using np.linalg.norm as well as the precedence for the difference. Any suggestions as to what else I can check for?

Fixed for @srkrish2 too. The error was caused by this.

Good luck with the rest of the course!


I'm getting the same error, and I have checked that I'm using np.linalg.norm as well as the precedence for the difference. I don't know what else to check for. My theta_plus[i] is different from my theta_minus[i]. What can I do to check?

Resolved. It was a similar case to Ruchi's: I was not subtracting epsilon in theta_minus.


Hi,
I am getting a similar error message, although the result looks good, and so does the shape of the "difference" variable. Where else may I check? Thank you.

<class 'numpy.ndarray'>
Your backward propagation works perfectly fine! difference = [[1.1890913e-07]]


AssertionError Traceback (most recent call last)
in
5 difference = gradient_check_n(parameters, gradients, X, Y, 1e-7, True)
6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
----> 7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
8 assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"

AssertionError: You are not using np.linalg.norm for numerator or denominator


That is not the correct type for the difference value. It should be a numpy scalar of type np.float64. Your value is (obviously) a 1 x 1 numpy array. So how did that happen? Maybe some "keepdims" parameters were used where they weren't needed?

Here’s a little sample code to show the type of the norm:

np.random.seed(42)
R = np.random.randn(1,4)
print(R)
print(R.shape)
ns = np.linalg.norm(R)
print(f"ns = {ns}")
print(f"type(ns) = {type(ns)}")

Output:
[[ 0.49671415 -0.1382643   0.64768854  1.52302986]]
(1, 4)
ns = 1.7334827235012424
type(ns) = <class 'numpy.float64'>

Oh, thank you. The problem was indeed the "keepdims" parameter. It works now.

Hi,
I'm getting a similar error that I can't solve. I checked for all the bugs mentioned in this topic and couldn't find any of them in my code. Can someone please help?

Error:
There is a mistake in the backward propagation! difference = 0.9999999999999866

AssertionError Traceback (most recent call last)
in
6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
----> 8 assert np.any(np.isclose(difference, expected_values)), “Wrong value. It is not one of the expected values”

AssertionError: Wrong value. It is not one of the expected values


Check your implementation of this formula more carefully:

diff = \displaystyle \frac{\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2}

Are you sure you didn’t use addition in the numerator?
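
For reference, here is a minimal, self-contained sketch of that formula using np.linalg.norm (the vectors are toy values, not the assignment's actual grad and gradapprox):

import numpy as np

grad = np.array([0.5, -1.2, 3.0])                            # toy "analytic" gradient
gradapprox = np.array([0.5000001, -1.1999999, 3.0000002])    # toy numerical approximation

numerator = np.linalg.norm(grad - gradapprox)                    # ||grad - gradapprox||
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # ||grad|| + ||gradapprox|| (plus, not minus)
difference = numerator / denominator

print(difference)        # a small value when the two gradients agree
print(type(difference))  # <class 'numpy.float64'> -- a scalar, not an ndarray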

Yes, I used the same formula in both gradient_check() and gradient_check_n(). The gradient_check() test passes successfully, but gradient_check_n() fails.


OK, then the mistake is elsewhere in gradient_check_n. There are plenty of other mistakes you could have made. The next thing to check is how you implement the "bump by \epsilon" logic: are you sure you didn't apply it to all elements of \theta at once, rather than just one? That would be a critical difference between the 1D and the multidimensional case.
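
For illustration only, the contrast looks roughly like this (the variable names are placeholders, not the assignment's code):

import numpy as np

theta = np.array([1.0, 2.0, 3.0])   # toy parameter vector
epsilon = 1e-7

# The common bug: this perturbs every component of theta at once.
theta_plus_wrong = theta + epsilon

# What gradient_check_n needs: inside the loop, copy theta and bump only component i.
for i in range(theta.shape[0]):
    theta_plus = np.copy(theta)
    theta_plus[i] += epsilon        # only element i changes in this iteration
    # ... evaluate the cost with theta_plus, then build theta_minus the same way with -epsilon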