Gradient Checking Error

I am currently getting this error and would appreciate some help.
I have confirmed that I am using np.linalg.norm correctly, and that the arithmetic in the numerator and denominator is correct. Kindly help me.

Error text:
> There is a mistake in the backward propagation! difference = 1.0
> ---------------------------------------------------------------------------
> AssertionError Traceback (most recent call last)
> in
> 6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
> 7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
> ----> 8 assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"
> 
> AssertionError: Wrong value. It is not one of the expected values

Those are the values expected for difference before and after the ‘fix.’ Yours is neither of those, hence the assertion failure. A clue might be that your value is exactly 1.0, which suggests the numerator and denominator of a ratio are the same. Maybe trace back to how those values were produced and let us know what you find?
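For reference, the difference in this kind of gradient check is typically a relative error between the backprop gradient and its numerical approximation. Here is a minimal sketch (the function and variable names are illustrative, not the assignment's exact code), showing how a degenerate gradapprox produces exactly 1.0:

```python
import numpy as np

def gc_difference(grad, gradapprox):
    # Relative difference: ||grad - gradapprox|| / (||grad|| + ||gradapprox||)
    # np.linalg.norm returns a scalar, so the result is a float, not an ndarray.
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator

grad = np.array([1.0, 2.0, 3.0])

# If gradapprox never gets filled in (stays all zeros), numerator and
# denominator both reduce to ||grad||, so the ratio is exactly 1.0 --
# the symptom described above.
print(gc_difference(grad, np.zeros_like(grad)))  # → 1.0

# A correct gradapprox equal to grad gives a difference of 0.0.
print(gc_difference(grad, grad))  # → 0.0
```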


This is what I got when I didn’t wrap my parameters_values in np.copy and used grad[I] for computing the numerator and the denominator. Thank you for your prompt response; I sincerely appreciate it.

I’m not sure whether your last reply means you resolved the problem, or you still have it but are getting closer. If the latter, now that you get a floating-point number not equal to 1.0, I would recommend going back through the algebra and making sure you are correctly implementing the guidelines. Check for inadvertent cut-and-paste duplication, using ‘+’ and ‘-’ in the correct places, and operator precedence with nested parentheses.

By the way, using the forum search I found someone else reporting difference = 0.333…; they found a cut-and-paste error in the expressions for J_plus = and J_minus =, where both were using ‘+’. Hope this helps.
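To illustrate the kind of sign slip described above: the two-sided estimate perturbs each parameter in opposite directions, so J_plus and J_minus must use ‘+’ and ‘-’ respectively. A minimal sketch with a hypothetical cost function J (not the assignment's code):

```python
import numpy as np

def J(theta):
    # Hypothetical scalar cost, just for illustration.
    return np.sum(theta ** 2)

def gradapprox(theta, eps=1e-7):
    approx = np.zeros_like(theta)
    for i in range(theta.size):
        theta_plus = np.copy(theta)    # copy so perturbations don't accumulate
        theta_minus = np.copy(theta)
        theta_plus[i] += eps           # J_plus perturbs with '+'
        theta_minus[i] -= eps          # J_minus must perturb with '-';
                                       # using '+' here is the cut-and-paste
                                       # bug mentioned above
        approx[i] = (J(theta_plus) - J(theta_minus)) / (2 * eps)
    return approx

theta = np.array([1.0, -2.0, 3.0])
print(gradapprox(theta))  # ≈ the analytic gradient, 2 * theta
```

With both perturbations using ‘+’, the numerator J_plus - J_minus collapses toward zero and the approximation is badly wrong, which is exactly what the check is designed to catch.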

It works now. The problem was how I was subsetting with the index i for my gradapprox values. Thanks so much for helping me out. I am very grateful.