DLS C2 W1 Assignment Assertion Error

I’m getting this AssertionError in the C2 W1 assignment, Exercise 4.

```
There is a mistake in the backward propagation! difference = 0.9999999999999866
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-50-c57ee5e9e05a> in <module>
      6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
      7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
----> 8 assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"

AssertionError: Wrong value. It is not one of the expected values
```

I’ve triple-checked the code and even reviewed the lectures to see where I might be going wrong, but I wasn’t able to pinpoint the problem; all the formulae looked correct to me.

Can someone help me understand the meaning of this error, and where I might be going wrong?

Please click my name and message me your notebook as an attachment.

It looks like “difference” is very close to 1, but not exactly 1. In this routine, “grad” is constant, so most likely gradapprox is a really small value. One potential cause is that “epsilon” was used for multiplication instead of division. Then gradapprox collapses to a very small value, and “difference” ends up close to 1.

Of course, once balaji looks at the code, he can easily find the bug… but the above is my guess from the output you provided.
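
To see why a vanishing gradapprox pushes the ratio toward 1, here is a minimal sketch, assuming the usual difference formula from this exercise, difference = ||grad - gradapprox|| / (||grad|| + ||gradapprox||); the numeric values are made up for illustration:

```python
import numpy as np

# Made-up stand-in for the true gradient vector (not from the notebook).
grad = np.array([0.5, -1.2, 0.3])

# Correct case: gradapprox tracks grad closely, so the ratio is tiny.
gradapprox_good = grad + 1e-9
numerator = np.linalg.norm(grad - gradapprox_good)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox_good)
print(numerator / denominator)  # ~1e-9

# Buggy case: multiplying by epsilon instead of dividing makes gradapprox
# nearly zero, so the ratio tends to ||grad|| / ||grad|| = 1, matching the
# 0.9999... value in the error above.
gradapprox_bad = grad * 1e-14
numerator = np.linalg.norm(grad - gradapprox_bad)
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox_bad)
print(numerator / denominator)  # ~1.0
```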

@Utkarsh2707

Here are some hints to fix your code:

  1. In backward_propagation_n, the calculations for dW2 and db1 are incorrect.
  2. In gradient_check_n, grad_approx[i] is incorrect. Please read this link on operator precedence. Here’s an example: 100 / 10 * 2 results in 20, not 5 (see the sketch after this list).
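
As an illustration of that precedence pitfall, here is a minimal sketch; J_plus_i, J_minus_i, and epsilon are hypothetical stand-in values, not the notebook’s:

```python
# Hypothetical stand-ins for J(theta + eps), J(theta - eps), and eps.
J_plus_i, J_minus_i, epsilon = 0.501, 0.499, 1e-7

# Without parentheses, / and * evaluate left to right, so this computes
# ((J_plus_i - J_minus_i) / 2) * epsilon -- a tiny, wrong value.
wrong = (J_plus_i - J_minus_i) / 2 * epsilon
print(wrong)  # ~1e-10

# The centered difference needs the whole 2 * epsilon in the denominator:
gradapprox_i = (J_plus_i - J_minus_i) / (2 * epsilon)
print(gradapprox_i)  # ~10000.0
```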

@balaji.ambresh thank you for the hint about operator precedence! I now see where I was going wrong.
However, the calculations for dW2 and db1 are supposed to be wrong: the notebook plants those bugs intentionally so that the gradient check can detect them.

Thank you so much for your help!