Hi,

I can’t find the bug in my code. I already fixed the two intentional mistakes (the *\* 2* and *4.* terms) in the backward_propagation_n function, and in the gradient_check_n function:

- I didn’t confuse theta_plus and theta_minus.
- I noticed that the numerator and denominator are computed outside the loop, so they don’t use [i].

What am I missing? Where is the mistake?

Thanks

Please be aware that no one else can see your notebooks, so we can’t debug it for you. There are lots of potential things that could be wrong. We’re not supposed to share code in the public forums here, so perhaps you can show us the error output you get. Maybe that will give a clue.

A few other things that come to mind as possible errors:

Are you sure that when you modify *theta_plus* and *theta_minus*, you only change one element at a time on each iteration?
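One way to check that point: make a fresh copy of theta on every iteration and perturb only element i. Here’s a minimal sketch with hypothetical names (it assumes theta is a 1-D numpy vector and J is a cost function of theta; it is not the assignment’s exact code):

```python
import numpy as np

def approx_gradient(J, theta, epsilon=1e-7):
    # Hypothetical helper: two-sided finite differences,
    # perturbing exactly ONE element of theta per iteration.
    gradapprox = np.zeros_like(theta)
    for i in range(theta.shape[0]):
        theta_plus = np.copy(theta)   # fresh copy each iteration
        theta_plus[i] += epsilon      # only element i changes
        theta_minus = np.copy(theta)
        theta_minus[i] -= epsilon
        gradapprox[i] = (J(theta_plus) - J(theta_minus)) / (2. * epsilon)
    return gradapprox
```

If you perturb theta in place without copying, earlier perturbations accumulate and every later gradapprox[i] is contaminated.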

Also check your “order of operations” on the computation that involves 2 * epsilon in the denominator. Try the following and watch what happens:

```python
m = 5.
x = 1 / 2. * m    # evaluates as (1 / 2.) * m = 2.5
y = 1 / (2. * m)  # evaluates as 1 / 10. = 0.1
```

If you’re expecting x and y to be equal, you’re in for a surprise!

Hi,

Sorry for the delay.

This is the error -

```
There is a mistake in the backward propagation! difference = 0.7283735199673291

AssertionError                            Traceback (most recent call last)
in
      6 expected_values = [0.2850931567761623, 1.1890913024229996e-07]
      7 assert not(type(difference) == np.ndarray), "You are not using np.linalg.norm for numerator or denominator"
----> 8 assert np.any(np.isclose(difference, expected_values)), "Wrong value. It is not one of the expected values"

AssertionError: Wrong value. It is not one of the expected values
```

Interesting. I’ve never seen that particular error value before. Did you fix the intentional errors that they added in the backward propagation function? Maybe you only fixed one of them: note that there are two. Or if you left the errors in place, you should get the 0.285… value as the difference.
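For reference, the “difference” that the test checks is a relative norm, which is why the assertion complains if you skip np.linalg.norm. A sketch of that formula (assuming grad and gradapprox are already flattened 1-D numpy vectors; the function name here is hypothetical):

```python
import numpy as np

def gradient_difference(grad, gradapprox):
    # Relative difference: ||grad - gradapprox|| / (||grad|| + ||gradapprox||)
    # Both numerator and denominator use the L2 norm (np.linalg.norm),
    # so the result is a scalar, not an ndarray.
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator
```

If your difference comes back as an ndarray instead of a scalar, that usually means element-wise subtraction or division was used without the norms.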

If the above doesn’t help, please check your DMs for some other alternatives.