DLS2 Week 1, Assignment 3, gradient_check_n subtle difference

Hello,

I believe I implemented the functions correctly, as the auto-grader didn't pick up any mistakes. However, I got stuck for an hour on the last function because of the following result, which I fail to understand:

There is a mistake in the backward propagation! difference = 0.24389144784740008

```
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
in
      6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
      7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
----> 8 assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"

AssertionError: Wrong value. It is not one of the expected values
```

This happens because the difference I compute is 0.24389144784740008, whereas the expected value the assert compares against is 0.2850931567761623.
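For reference, my understanding is that gradient_check_n computes the standard relative difference between the backprop gradient and the numerical approximation, using np.linalg.norm for both the numerator and the denominator (the assertion message says as much). A minimal sketch, with made-up vectors standing in for the real ones built inside the function:

```python
import numpy as np

# Made-up stand-ins for the vectors built inside gradient_check_n
grad = np.array([0.5, -1.2, 3.0])               # gradient from backprop
gradapprox = np.array([0.5, -1.2, 3.0 + 1e-7])  # two-sided numerical estimate

numerator = np.linalg.norm(grad - gradapprox)                    # ||grad - gradapprox||_2
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)  # ||grad||_2 + ||gradapprox||_2
difference = numerator / denominator

# np.linalg.norm returns a scalar, not an ndarray, which is what the
# first assert in the test cell checks for.
print(difference)
```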

After correcting the errors in dW2 and db2, I ran the check again, and this time there was no apparent mistake:

Your backward propagation works perfectly fine! difference = 1.1890912740685776e-07

However, this time too the value is not the one given by the notebook, although it is very close (the expected value in the notebook is 1.1890913024229996e-07; mine differs slightly in the last digits).

I searched for more than an hour for the source of that small difference, without success, so I decided to submit the assignment anyway, and the auto-grader gave me full marks. But I still don't understand where the difference comes from, especially the gap between 0.24 and 0.28 that previously triggered the assertion error (the "Wrong value" message in the traceback above).
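As a guess about the tiny mismatch (this is an assumption on my part, not something the notebook states): trailing-digit differences can come purely from floating-point rounding. Floating-point addition is not associative, so two correct implementations that merely order their operations differently (for example a vectorized versus an element-wise computation) can disagree in the last digits. A tiny self-contained illustration:

```python
# Floating-point addition is not associative: the same three numbers
# summed in a different order give different doubles.
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
print(a == b)           # False
```

The same effect, accumulated through forward propagation and the norm computations, could shift the last digits of the difference without the gradients themselves being wrong.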

Best regards

Hi, @will06.

Sorry for the late reply :sweat:

Let me know if you still need help with this.

Exactly the same issue here. Can anyone help?

Hi @James_Wang ,

Please post a fresh query with the error traceback to help with the diagnosis.

For anyone else who sees this post: James made a mistake that produces exactly the same incorrect results, so what is probably the answer can be found on this other, newer thread.