Week 1 Programming Assignment: Gradient Checking

I am getting this error:

AssertionError                            Traceback (most recent call last)
in
      6 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
      7
----> 8 gradient_check_n_test(gradient_check_n, parameters, gradients, X, Y)

~/work/release/W1A3/public_tests.py in gradient_check_n_test(target, parameters, gradients, X, Y)
     56 ]
even though I have used np.linalg.norm in my program. Here is the relevant part of my code:

Compare gradapprox to backward propagation gradients by computing difference.

# (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox)                      # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)    # Step 2'
difference = numerator / denominator                               # Step 3'
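For what it's worth, I think I understand what that first assert is checking: if you skip np.linalg.norm, "difference" stays an ndarray instead of a single number. A small standalone sketch with made-up values:

import numpy as np

grad = np.array([[0.3], [-1.2], [0.7]])
gradapprox = np.array([[0.31], [-1.19], [0.69]])

# Element-wise division (no norms) leaves the result as an ndarray,
# which is exactly what the first assert in the test catches.
bad = np.abs(grad - gradapprox) / (np.abs(grad) + np.abs(gradapprox))
print(type(bad))     # <class 'numpy.ndarray'>

# With np.linalg.norm, numerator and denominator are scalars, so
# the result is a single number, as the test expects.
good = np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))
print(type(good))    # <class 'numpy.float64'>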

Me too, and I don't know why.

Hi, @rmsrilatha and @ken25.

Please see if this post helps. Good luck :slight_smile:

Did you find any solution?

That post didn't really help! I am still stuck.

Hi @blackfeather,

I think @nramon helped out in the other thread, right?

Happy learning!


@nramon
@neurogeek

Hi guys, I am having the same problem, and none of the posts mentioned are very helpful.

My code is the same as @rmsrilatha's.

I followed all the linked threads and still couldn't figure out what I'm missing. I did use np.linalg.norm, and checked that X and Y are capitalized. My error messages are:

There is a mistake in the backward propagation! difference = 0.955106072623136

AssertionError                            Traceback (most recent call last)
in
      6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
      7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
----> 8 assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"

AssertionError: Wrong value. It is not one of the expected values

Hi,
Did you fix this issue?
Since "grad" does not change during gradient_check_n(), the problem is on the "gradapprox" side. In your case, "difference" is close to 1. Looking at the formula, if "gradapprox" is very close to 0, or far larger than "grad", "difference" will be very close to 1. As both "forward_propagation_n" and "backward_propagation_n" are provided, the problem should be in how "gradapprox" is calculated.
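To illustrate with made-up numbers (a standalone sketch, not assignment code): if "gradapprox" collapses to zero while "grad" does not, the relative difference saturates at 1.

import numpy as np

grad = np.array([0.3, -1.2, 0.7])        # pretend analytic gradients from backprop
gradapprox = np.zeros(3)                 # a broken numerical approximation, all zeros

numerator = np.linalg.norm(grad - gradapprox)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
print(numerator / denominator)           # prints 1.0: the formula tops out near 1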

I don’t see an error in my gradapprox formula: gradapprox[i]=np.sum(J_plus[i]-J_minus[i])/(2*epsilon)

Should backward_propagation_n be used in place of grad in the difference calculations?

backward_propagation_n would have needed cache as an argument, but in the calculation of J_plus[i] and J_minus[i] the assignment instructions say to use "_" in place of cache.

You need to use "forward_propagation_n", since this is the N-dimensional operation.
As you can see, both "cost" and "cache" are returned from "forward_propagation_n". In Python, "_" (underscore) conventionally means "I don't care about this value". So, we ignore the second return value, which is "cache", since it is not used in this scope. (You could assign it to a dummy variable instead, of course, but it is not used anyway.)
So, please use "forward_propagation_n", as sketched below.
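A rough sketch of that loop, assuming the notebook's vector_to_dictionary helper, the flattened parameters_values vector, the epsilon argument, and the forward_propagation_n(X, Y, parameters) signature (names in your copy of the notebook may differ slightly):

num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))

for i in range(num_parameters):
    # Nudge the i-th parameter up by epsilon and run the N-dimensional forward pass.
    theta_plus = np.copy(parameters_values)
    theta_plus[i] = theta_plus[i] + epsilon
    J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(theta_plus))   # "_" discards cache

    # Same thing, nudging the i-th parameter down by epsilon.
    theta_minus = np.copy(parameters_values)
    theta_minus[i] = theta_minus[i] - epsilon
    J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(theta_minus))

    # Two-sided difference quotient; J_plus[i] - J_minus[i] is already a single value,
    # so no np.sum is needed.
    gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)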

By the way, you use np.sum in the calculation of J_plus[i] - J_minus[i]. Both J_plus[i] and J_minus[i] are single-element arrays, and we only need the value at the "i"th location of gradapprox[]. So, np.sum is not required. (The result is the same either way, though.)
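A tiny standalone example of why np.sum is a no-op here:

import numpy as np

d = np.array([0.0421])   # a one-element array, like J_plus[i] - J_minus[i]
print(np.sum(d))         # 0.0421
print(d[0])              # 0.0421 -- summing a single value just returns it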